Elton Stoneman
Episode 13 - December 10, 2019 - 26:40 · Talk

How to Easily Modernize Older Applications With Docker

Featuring Elton Stoneman, architect and author

If you’re looking to level-up your Docker game or to get started using containers, you’re in for a treat. This week, I had the pleasure of chatting with Docker architect and author Elton Stoneman about:

  • His upcoming book for beginners, Learn Docker in a Month of Lunches
  • How enterprises can modernize their code without complete re-writes
  • What a typical Docker learning path looks like for beginners

Elton is a Docker enthusiast and expert who has been a Microsoft MVP since 2010 and a Pluralsight author since 2013. When he’s not writing books about Docker or speaking at industry conferences, you’ll probably find him sharing insights about Docker and .NET on Twitter.

Listen to our entire conversation below, and check out my favorite parts in the episode highlights!

You can also get Semaphore Uncut on Apple Podcasts, Spotify, Google Podcasts, Stitcher, and more.

Like this episode? Be sure to leave a ⭐️⭐️⭐️⭐️⭐️ review on the podcast player of your choice and share it with your friends!

Highlights from this Episode

Darko Fabijan: (00:02) Hello and welcome to Semaphore Uncut, a podcast where we talk about continuous integration, continuous delivery, and generally developer experience and technologies.

Today, we have Elton Stoneman with us. Elton, thank you for joining us.

Elton Stoneman: Thank you very much for having me. I was listening to your podcast episode with Bret Fisher, and you covered some container topics. I know you guys are big into containers.

So, my name is Elton. I’ve been working with Docker for over three years now. I’m an architect in the partnerships team, and I work with companies like Microsoft, GitHub, and AWS. On a technical level, I show them what we’ve got coming through in our products, they show me what they’ve got coming, and we work out some nice development stories together.

And before I joined Docker, I was a .NET consultant for most of my career. I was building big, ugly, monolithic applications that I now spend time teaching people how to break up and move to the cloud.

I joined Docker because Windows containers were a new thing at the time, and I had been using Docker on Linux on a project back in 2015. I was really interested in seeing what was going to happen in the Windows world. Then, when Windows containers came on board, I joined Docker, and part of my job now is spreading the word to the Windows and .NET community about all this cool new stuff.

Elton’s upcoming Docker book for beginners

Darko: (01:16) Great, great. And part of your job is also teaching?

Elton: Absolutely. I’m often at conferences speaking about what’s going on in Docker, what good patterns and practices look like, and what people should be considering for their architecture.

But I’m also an author at Pluralsight, the online video training company, so I’ve got a stack of Pluralsight courses, and I’m a book author. I wrote the book Docker on Windows, which tells you what it’s about just from the title. And I’m in the process of writing another book now, which is Learn Docker in a Month of Lunches.


Semaphore’s note: We can’t wait for Elton’s book to be released this coming year! Use the discount code podsemaphore19 when you purchase the book in any format.


It’s quite interesting because I’m sure you were in a similar position. You’ve been using containers for a very long time and there’s a point where you just take it for granted. You know this is how it all works and that it unlocks all these great capabilities. But there are a ton of people who are still really new to this.

So the new book that I’m writing is very much a step-by-step guide. It takes you from the beginning through some quite advanced stuff, but it’s for people who are just starting out with containers.

The beauty of using Docker on Windows

Darko: You said that you spent a number of years building monolithic applications on Windows, and I’m sure that it’s not as bad as you described it! But I get the idea.

Docker initially built on years and years of development in the Linux kernel and all that, and Windows was, let’s say, joining the party. Can you give us a bit of the history of how it came about, and what the current state of Docker on Windows is?

Elton: Sure. Like I said, the development was really interesting, because Docker came along and took these primitives that had been in Linux for a very long time, the ones that let you take a set of processes, put a thin boundary around them, and call that a container. Docker just came along and made it super easy.

And then the opposite happened on Windows. When Docker was becoming really popular, we were working with the Windows Server team, and they were really keen to bring that experience to Windows. On Linux, Docker arrived and brought a simple developer experience to containers that already existed; on Windows, they wanted to start from that user experience, so that it would be the same for Windows containers as it was for Linux containers.

But then they had to go back and build the sorts of primitives that Linux had into Windows, because Windows never had that stuff. It didn’t have the idea of namespaces and cgroups; it didn’t have anything like that. It still doesn’t have quite the same things, so the internals of how a Windows Server container works are slightly different, but as far as Docker is concerned, it’s the same. It’s the same set of artifacts.

So you have a Dockerfile to package up your application, that produces a Docker image, and you run the image inside a container. Then you share those images on Docker Hub or whatever registry you’re using.

That API is the same. The Docker command line is the same. A huge advantage is you can take these older applications that are built for Windows, and you start bringing them into the modern world.
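
Semaphore’s note: in concrete terms, that workflow is just a few CLI commands, and they are identical for Windows and Linux containers. A minimal sketch (the image and registry names are placeholders):

    # package the application described by the Dockerfile into an image
    docker image build -t my-registry/legacy-app:v1 .

    # run the image as a container, publishing a port so traffic can reach it
    docker container run -d -p 8080:80 my-registry/legacy-app:v1

    # share the image on Docker Hub or whatever registry you use
    docker image push my-registry/legacy-app:v1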

What you end up with is a really consistent set of tools and processes throughout your whole stack because everything has a Dockerfile, everything builds an image, every image gets security scanned and signed or whatever your pipeline is.

Ultimately, you deploy it with Docker Compose in your test environments and Kubernetes in your production environments, or whatever you’re doing, but it’s the same set of artifacts everywhere, whether it’s your 10-year-old Windows application or your brand new application. So it just simplifies things for a lot of teams.

[Dig deeper: A Step-by-Step Guide to Continuous Deployment on Kubernetes]

Darko: (04:45) That’s very nice. I don’t remember there ever being a time when something worked on Linux and Windows in the same format. Never.

Elton: Well it’s one of the things I’m trying to do with this book at the moment because you can package up your Docker images so that they run as Windows containers on Windows or Linux containers on Linux. What I’ve tried to do is make every example in this book the same everywhere.

As a reader, you can follow the code samples in the book and do a docker container run. If you’re running on a Raspberry Pi, it’ll pull down a Linux image and run it on Linux, and if you’re running on Windows 10, it’ll pull down a Windows image and run it on your Windows machine.

That’s super easy from the user’s point of view, but it’s much harder for me as the author of those images. Little things like the commands are different between the Windows command shell and the Bash shell, and I need to make sure that the images are portable.

It puts the onus on the image author, the person who’s publishing the application, to make sure that it works in the same way everywhere. But you know, it makes it hard for one person and easy for the 10,000 people who use the images.
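
Semaphore’s note: one way to get that kind of portability, as a rough sketch, is to base the image on an official multi-architecture image and avoid shell-specific instructions entirely, so the same Dockerfile builds on Windows or Linux. The application and file names here are hypothetical:

    # no RUN instructions, so nothing depends on cmd vs. bash inside the container
    FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
    WORKDIR /app
    COPY ./publish/ .
    ENTRYPOINT ["dotnet", "WebApp.dll"]

The base image is published for both operating systems and several CPU architectures, so Docker pulls the matching variant at build time.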

Actually, the process isolation part of it, as I understand it, wasn’t so complicated, because there was already isolation at different levels within the Windows kernel and they just needed another way of expressing it. The networking piece was a lot harder, I think.

The Linux networking stack is infinitely complex and pluggable, and the Windows networking stack was a lot more straightforward. But Docker took advantage of all those weird and wonderful parts of the Linux networking stack, and a lot of that had to come into Windows.

When Windows containers were first supported, which was when Windows Server 2016 came out, you could join a Windows node to a Docker swarm, but it wasn’t a full part of the swarm. The clustering technology wasn’t quite there, because Windows nodes couldn’t take part in the distributed network, so I couldn’t have Linux containers talking to Windows containers on the same cluster. It took the first rollup patch release of Windows Server to add that functionality.

It was the Service Pack 1-style release of Windows Server 2016 that added the networking features which brought Windows nodes in the swarm up to the same level of capability as the Linux nodes. The same thing has been happening now with Kubernetes: Windows support came in as alpha a couple of releases ago, then it graduated to beta, and it has finally gone GA now.

A lot of what has been fixed over time has to do with networking, making sure that pods can communicate with each other, whether it’s a Windows pod on the same machine, a Windows pod on a different machine, or a Linux pod on a different machine. So the networking stack was the most complex set of problems to solve.

Running Linux and Windows Applications Side by Side

Darko: (08:57) That’s great. From your experience with working with people and with Docker—which is touching both Linux and Windows—what are some of the usual practices of using it? Are people running on the same Kubernetes cluster? Do they have Windows and Linux applications side by side—some being Windows, and some Linux?

Elton: Yeah, so that’s pretty new. The cloud providers and the on-premises Kubernetes providers are either in the process of adding Windows support or have just taken it GA. So Azure Kubernetes Service (AKS) and the other managed Kubernetes services all have Windows nodes at various levels of support; some of them are fully GA and some are currently in preview.

Similarly with the on-premises ones, the major providers, Docker Enterprise and the likes of Rancher, have Windows support for their Kubernetes nodes now. The space that I work in is talking to companies who have a similar history to mine: a history of Windows applications and .NET applications. They want to move them to the cloud, or they want to break them up into smaller pieces, and they want containers to be part of that journey.

The pattern that is emerging is that you can take your old monolith and wrap it up in a Docker image fairly easily. You can package that up to run in Kubernetes fairly easily, push it onto AKS, and be up and running in a week. But you haven’t got a cloud-native application, you’ve just got your whole monolith running in Kubernetes in the cloud. Then you can start breaking it up, and as you work on features, you’re likely going to split those out into separate containers.

If you’re in the .NET world, your old application will be full .NET Framework running in Windows pods, but the new features may very well be .NET Core apps running in Linux pods. I think what we’ll see is that if I’m predominantly a Windows shop, I’m going to start off with a Kubernetes cluster that is 90% Windows nodes and 10% Linux nodes. Gradually, as my apps evolve, I’m going to shrink my Windows estate and scale up my Linux estate, for reasons of cost and efficiency and all that sort of stuff.

Ultimately, I may just have a couple of Windows nodes that are running those old applications that don’t justify being rewritten or being broken up. Just leave them as they are, and the rest is migrating to cross-platform stuff that can run on Linux.
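
Semaphore’s note: on a mixed cluster, the usual way to keep the old and new parts on the right nodes is a node selector on each workload. A minimal sketch, assuming the standard kubernetes.io/os node label and placeholder names:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: legacy-dotnet-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: legacy-dotnet-app
      template:
        metadata:
          labels:
            app: legacy-dotnet-app
        spec:
          nodeSelector:
            kubernetes.io/os: windows   # schedule only onto Windows nodes
          containers:
          - name: web
            image: my-registry/legacy-dotnet-app:v1
            ports:
            - containerPort: 80

The new .NET Core services would carry kubernetes.io/os: linux instead, so both halves of the application run side by side in one cluster.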

What microservice design can offer companies

Darko: (11:01) Yeah, I mean, obviously in 2014-2015 there were the early adopters of Docker. For me as a relatively young developer, it’s surprising how keen big companies are on adopting Docker and Kubernetes.

From what you described, it makes lots of sense because those monolithic applications that you are mentioning… There are tens of thousands of hours and billions of dollars potentially invested in some of those, but you need to move forward. You can’t rewrite that.

Elton: Yeah, exactly. If I’m doing a workshop with companies who are in that position of having these big monolithic apps, they understand the advantages. They understand where they want to get to, which is that certain features need a much higher release cadence because they want to get new features out quickly.

Certain features are brittle, and we want to make sure that we don’t release them as part of some other release. Those are all the things that you get with a microservice architecture, but they don’t want to take their old application, stop development for 18 months, and completely rewrite it, because there’s very little business value in that.

The approach that we work through when we’re looking at this sort of thing is to take those known pain points. The big advantage of having a big monolithic application that’s a complete nightmare to work with is that you understand why it’s a nightmare. You understand the bits that are difficult.

You can start to carve those out. For a first release, 90% of the code might still be in that monolith, but one feature that needs rapid development has come out into a separate container. The next time I do a release of that feature, I leave the monolith as it is. I’m not going to do an update of those pods or those containers, so I don’t need to do my two-week regression test cycle. I just test the new things that I’m deploying, which might take a day or two, and I can get a release out really quickly.

Gradually, you take the important parts of your application, or the parts that make it difficult to maintain, and bring those out into separate features. Over time, you realize the benefits of the modern approaches without a big rewrite project, because those big rewrite projects are lengthy and incredibly risky.

Darko: Yeah. You describe it in such a way that it sounds so obvious, but I wasn’t thinking about it in such a straightforward way: the release cycle alone can be very different. If you take into account regulated industries, there may be parts of your application that you really don’t want to touch for many months, while on the other parts you can move freely.

Elton: I mean, there are some good indicators of the pieces to pull out of a monolithic application. One is the things that I need to change regularly, but, like you say, the other is the things that don’t need to change very often.

One of my consulting gigs was at an investment bank, and we had a third-party service that we consumed. It was really complicated, nasty code, because their API wasn’t very friendly, and they only ever changed their API once a year. But every time we did a release of the software, which was only five or six times a year, we had to make sure we regression tested that whole component.

We had a huge test suite for it, because if it failed, things were catastrophically bad. But if we could pull that out into its own feature, then we would only release it once a year, when their API changed, and we would save all that testing time and all that risk. There are some nice approaches that this stuff just makes easy to do.

Elton’s Advice to Docker and Kubernetes beginners

Darko: (14:05) If someone is completely new to Docker and Kubernetes, what advice would you give them? What’s the best way to get started?

Elton: Well, the learning path really starts with running a container. So get your head around the concept of a Dockerfile, which is just a script that packages up an application; building an image, which is really just a big zip file that contains your entire application and that you can share around; and running single containers, getting comfortable with the Docker commands and the Dockerfile syntax.

And that stuff is actually pretty simple. For the Dockerfile syntax, you only need to learn four or five instructions, really. Anything complicated that you need to do to set up your application inside the Dockerfile, you’ll be doing with Bash scripts or PowerShell scripts anyway, so you’re going to use some of the skills you’ve already got to package up your application. And for the Docker commands, it’s docker run to start your application, and you publish ports to send traffic into your containers. There is a fairly streamlined set of things to learn.
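
Semaphore’s note: as a rough illustration of how little is involved, here is a minimal sketch of a first Dockerfile (the base image, file names, and ports are placeholders for an imaginary Node.js app):

    FROM node:12-alpine              # base image with the runtime already installed
    WORKDIR /app                     # working directory inside the image
    COPY . .                         # copy the application source in
    RUN npm install --production     # anything complex is just an ordinary shell command
    EXPOSE 3000                      # document the port the app listens on
    CMD ["node", "server.js"]        # what to run when a container starts

Building and running it is two commands, with -p publishing the port so traffic can reach the container:

    docker build -t myapp .
    docker run -d -p 8080:3000 myapp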

Moving on to multi-container applications

And the next stage is multi-container applications. So maybe you’ve got an application that has an API and a website, or an API, a website, and a database, and then you add in a message queue. Then you learn Docker Compose, which is how you describe a distributed application. And again, it’s a new thing that you have to learn, but it’s fairly straightforward.

The Dockerfile is a script that describes packaging up one part of your app, and the Docker Compose file describes the structure of your application, all the different parts. So you have your API container and your web container, and then you get a feel for what it’s like to deal with distributed applications in containers.

That’s often where it really clicks for people how valuable this stuff is. Because when you start as a new joiner on a project, you’re going to browse to the GitHub page, clone the code, run docker-compose up, and that’s it. The whole app will be running on your machine, the same way that it runs in the test environment and the continuous integration environment and everywhere else, because it’s all been codified in those little script files and the Docker Compose file.
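
Semaphore’s note: a minimal sketch of what such a Compose file might look like for the kind of app Elton describes (image names and the database password are placeholders):

    # docker-compose.yml: the structure of the whole application in one file
    version: "3.7"
    services:
      web:
        image: myorg/web:v1
        ports:
          - "8080:80"
        depends_on:
          - api
      api:
        image: myorg/api:v1
        depends_on:
          - db
      db:
        image: postgres:11-alpine
        environment:
          POSTGRES_PASSWORD: example

With that file in the repository, docker-compose up -d starts every service with its networking already wired up.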

Choosing an orchestration system

And then, that’s when you realize, “I need to choose an orchestrator.” Kubernetes is the default because there are managed services everywhere, but Docker Swarm is an alternative. It’s worth looking into Docker Swarm because it’s part of Docker, so it’s easy to get up and running, and it’s much simpler to work with than Kubernetes because it uses that same Docker Compose structure. So I can take the Docker Compose file that I’m comfortable with, the one I use in my dev environment, and use the same thing, or a modified version of it, to deploy to production.
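
Semaphore’s note: as a sketch of how direct that step can be, a machine running Docker can be turned into a one-node swarm and the same Compose file deployed as a stack (the stack name is arbitrary):

    docker swarm init                                 # turn this Docker engine into a swarm manager
    docker stack deploy -c docker-compose.yml myapp   # deploy the Compose file as a stack
    docker stack services myapp                       # check the services the swarm is now running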

And then if you’ve got comfortable with Docker Swarm, learning Kubernetes is easier because some of the concepts come with you. So you know the concept of a cluster: now I’ve got a whole bunch of machines that run Docker, I don’t start individual containers, I take my application description, which is a YAML file, and I give it to the cluster, and it works out where to run the containers.

Fundamentally, all the orchestrators do the same thing, but there are levels of complexity that you can dig into, and you can spend months learning Kubernetes. So I think the important thing here is that there are some key technologies. You’re going to learn Docker, because no matter what your end goal is, containers are how all this stuff starts. You’re probably going to learn Compose, because that’s how you describe your application. And then you’re either going to go the Swarm route, or Nomad by HashiCorp, which is another container orchestrator, or you’re going to go for Kubernetes.

Choose your own Docker adventure

But it’s not a fixed journey where you have to go all the way to the end and become a Kubernetes expert before you take this stuff into production. When you’re happy with Docker Compose, you can stop there. You can run single containers on single servers if that’s going to get you some value, because if you’re currently running every app on a virtual machine (VM), and those VMs are running old, unsupported versions of the operating system, then moving to containers and running Docker Compose on your server is a big step forward. It’s not microservices, it’s not highly available, and it’s not super scalable, because you need a cluster for that, but it’s still better than what you’ve got today.

So the journey is typically Docker, Compose, and then Swarm or Kubernetes, but you don’t have to go all the way. You can stop where it makes sense and then move on to the next step when you’re ready.

What’s next at Docker for ARM and IoT

Darko: (18:05) Docker Compose was a surprise for me because it makes a developer’s day-to-day life so much better. You don’t have to worry about how to install a very specific version of Redis or Postgres or how to do the networking stuff. That’s a couple of lines in the Docker Compose file and it just works.

When we were preparing for this episode, you mentioned something interesting—which I don’t know much about—and that’s the cross-platform ARM images. Can you get us up to speed on where it’s headed?

Elton: One of our partners at Docker is ARM. They don’t make chips, but they make chip blueprints. Practically every mobile phone in the world has an ARM processor, and on the desktop they’re probably best known for the Raspberry Pi.

The processors are super efficient: they’re lightweight in terms of the energy they require and the heat they produce. There’s a move toward bringing these things into the data center.

So suddenly there are all these different devices, whether they’re IoT devices, edge devices, or data center devices, that are running ARM CPUs, and that CPU instruction set is completely different from the Intel instruction set that everyone’s been using for the last 20 years. You need to rebuild your apps to run on ARM, and not every application platform works on ARM, but most of the modern ones do.

Anything like Node.js, Go, .NET Core, Java: they all work just fine on ARM, but the developer experience is pretty bad. Typically, you’re going to connect somehow to your Raspberry Pi, maybe with a USB cable to send the code down there, and then you’ll log on to the Raspberry Pi and do a build (and that takes forever, because the CPUs are fine for running apps, but compiling is quite an intensive process). Then you need to find a way to ship that application, and it’s just a difficult experience.

But then Docker started working with ARM and realized that the artifacts you use to build your application, like the Dockerfile, let you be cross-platform. If you’re new to all this, your Dockerfile is how you package up your image, but you can also compile your code inside your Dockerfile.

So, you don’t need to have a software development kit (SDK) with a Go compiler or the .NET compiler or anything like that on your machine; that can come in a container. That container can compile the code for you and produce the output before packaging it up to run in another container. And that means it can work across any platform.
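
Semaphore’s note: what Elton is describing here is a multi-stage build. A minimal sketch, assuming a hypothetical Go application (the file names and image tags are placeholders):

    # build stage: the Go SDK lives in this container, not on your machine;
    # the base image is multi-arch, so on a Raspberry Pi it compiles for ARM
    FROM golang:1.13 AS builder
    WORKDIR /src
    COPY . .
    RUN go build -o /out/myapp .

    # final stage: the runtime image only contains the compiled output
    FROM debian:buster-slim
    COPY --from=builder /out/myapp /usr/local/bin/myapp
    ENTRYPOINT ["/usr/local/bin/myapp"]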

So, if I’m running on Windows, then when I’m building my code, it’s going to build using the Windows compiler. If I’m running on a Raspberry Pi, it builds using the ARM compiler, and the output I get will be a Windows version on my Windows machine and an ARM version on my Raspberry Pi.

But what Docker Hub, and any Docker registry, which is how you share these things, lets you do is share an image in such a way that it has a single name. So I’d have my application called 6i/myapp; 6i is my username on the Hub, but that’s like an umbrella name, and underneath it there are different images for different architectures. So when I do docker run 6i/myapp, if I’m running on Windows 10, I’m going to get the Windows version, and that will run as a Windows container. If I’m running on a Raspberry Pi, I’ll get the ARM version, and it’ll run my app as a Linux container. Docker takes care of all that stuff for you.

The image metadata that lives in the registry contains the operating system and the CPU architecture. And when you’re running your Docker engine, it knows your local operating system and CPU architecture. So when you pull an image, or when you try to run a container and it pulls an image for you, it pulls the one that matches. It just takes care of all the complexity for you.

Again, as the image author, you need to be aware that there are differences between some of those platforms and you need to allow for that. But ultimately you’re publishing one thing and you’re letting people consume it in different ways.
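
Semaphore’s note: one way an author can publish a single multi-architecture tag like that is with Docker’s buildx tooling, and anyone can list the per-platform images behind a tag with the manifest command. A sketch, reusing the 6i/myapp name from the conversation (which platforms you target is up to you):

    # build and push one tag that contains images for several architectures
    docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 \
      -t 6i/myapp:v1 --push .

    # list the operating systems and CPU architectures behind the tag
    docker manifest inspect 6i/myapp:v1

Depending on your Docker version, both commands may require the experimental CLI features to be enabled.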

So it becomes super simple. We did a demo at DockerCon, which is the big Docker conference. We had a Dockerfile to build and run a Java application, and we were running it on Intel machines in AWS. Then we spun up an ARM virtual machine in AWS; they have these instances called A1, which run ARM CPUs, and they’re about half the price of the Intel equivalent.

We took that exact same Dockerfile, built it on that A1 machine, and ran it on that server, and we didn’t have to change a single line of code or a single line of the Dockerfile. We just had our app running at half the price that we were previously paying. You do need an application that will run on any architecture, so it needs to be on one of the platforms that support that.

But if you’ve got that, it’s a super good way of doing these things. And that’s just the data center use case, because, of course, if you’re building something for an IoT device, one of the biggest problems is how you ship software out, how you reliably get a new update onto it without having to restart the machine. If your application is running in a container and you’re using Docker Hub to distribute it, all that stuff is taken care of for you. So there are some really nice use cases around what you can do with ARM.

What’s Next for Docker?

Darko: (23:20) Is there anything that you’re looking forward to next year in terms of Docker? What are some features that you’re looking forward to?

Elton: We’ve got some new features that are really cool in Docker Desktop, which I’ve been working with lately. We have this thing called Templates, and a Template is the notion that I can go into Docker Desktop and use a Template to bootstrap a new application.

So, I can pick a Template that’s got a .NET Core web app and a Go REST API, and it uses Postgres and Redis, and I click a button, and it spins up all the stuff for me. It spins up some demo code plus the Dockerfiles, plus the Docker Compose file, and I can run that stuff up and see it all working locally.

We’ve had those Templates for a little while, actually. But to generate the Template output on your machine, you can do anything, because those Templates run inside containers. So, in theory, you can put anything into that type of Template.

And I’ve been working with stuff like GitHub Actions, so that when you spin out your Template, not only can you run everything locally, but you can also push it to GitHub, and it will create a Kubernetes cluster plus your Postgres database. It gets the connection string from Postgres and creates it as a secret in AKS, it deploys your application; it just does everything for you. And again, there’s a certain amount of work that the Template author has to do to get that experience, but as the user you’re literally clicking, putting in some details, and you get all of this set up for you.

A lot of people are very interested in that, ranging from people who do the kind of job that I do (which is going out and showing people how these applications work, because it’s really easy to wrap up a demo and show it to people) all the way through to architects in big enterprises who want to take all their best practices and put them into a Template that’s reusable, easy to discover, and that they own.

And that’s the ongoing ubiquity of these things. Everything runs in containers: the GitHub Actions that I mentioned, your DevOps tooling, all kinds of continuous integration things, they all run in containers. So you take your tools, wrap them up, and run them anywhere.

And then there’s the ubiquity we’re seeing around Kubernetes: you can run Kubernetes through Docker Desktop on your Windows or Mac machine, and it’s a real single-node Kubernetes cluster. We’re just seeing that stuff everywhere.

If you look at the Cloud Native Computing Foundation, they have this landscape where they plot out all the interesting technologies in the space, and there are hundreds, if not thousands, of things there. They’re all powered by the fact that you have Docker containers, and you have a cluster that can run everything with high availability and scale in Kubernetes. If you’re at the beginning of that journey, it is a learning path and there is complexity that you have to get your head around. But once you’ve learned Kubernetes, everything else uses that stuff anyway.

Darko: (26:30) Thank you very much for your time. And it was a pleasure talking to you.

Elton: And you!


Enjoy this interview? Listen to more Semaphore Uncut episodes:

Meet the host

Darko Fabijan

Darko, co-founder of Semaphore, enjoys breaking new ground and exploring tools and ideas that improve developer lives. He enjoys finding the best technical solutions with his engineering team at Semaphore. In his spare time, you’ll find him cooking, hiking and gardening indoors.