Episode 9 · Jul 23, 2019 · 43:15 · Talk

Bret Fisher on Speeding Up Your DevOps Workflows with Docker

Featuring Bret Fisher, DevOps Trainer and Consultant

In this podcast, Semaphore’s co-founder Darko Fabijan discusses the themes surrounding impact-driven software, and how we can support a culture of innovation in the world’s leading software companies.

For 25 years Bret has built and operated distributed systems as a sysadmin and helped over 100,000 people learn dev and ops topics. He is a Docker Captain, the author of the wildly popular Docker Mastery series on Udemy, and also provides DevOps-style consulting and live workshops with a focus on immutable infrastructures, containers, and orchestration.

He spends his free time maintaining open source and leading Virginia’s local, thriving tech scene. Bret basically spends his days helping people, and giving high fives. He lives at the beach, writes at bretfisher.com, prefers dogs over cats, and tweets at @BretFisher.

After listening, connect with Bret (@BretFisher) and me (@darkofabijan) on Twitter, and subscribe to Bret’s Docker and DevOps YouTube channel.

Watch this episode on YouTube

Complete transcript of the episode

Darko Fabijan: (00:16) Hello and welcome to Semaphore Uncut, a show where we talk about products, technologies, and the people behind them. My name is Darko. I’m a co-founder of Semaphore, and I will be your host.

Today, we have Bret Fisher with us. Thank you, Bret, for joining us. Feel free to introduce yourself.

Bret Fisher: My name is Bret Fisher. I’m a DevOps consultant, author, and instructor, and I teach Docker.

Why you should incorporate containers into DevOps workflows

Bret: (07:27) The larger question is for businesses, IT managers, and development team managers that are trying to figure out “Should we do containers?” For them, a lot of it is looking at the pain points in their organization.

Nowadays, when we’re all focused on DevOps-style workflows, it’s really about what is taking the longest. Is the local development experience a real pain for our team?

Maybe onboarding a new developer is a real pain. Maybe the continuous integration system is slow and we need to look at a new one, or we need to figure out how to make it faster. Maybe it’s so complicated that multiple people have to manage that solution. Or maybe it’s just getting updates into production without screwing everything up, kicking off users, or killing connections. How do we do that if we need to on a daily basis?

Once you start looking at those three areas, local development, CI/CD, and production, you can begin to figure out which problem you want to solve first. Obviously, at this point, containers really streamline that whole process if you fully adopt them in every area of your workflow, but you’ve got to pick one area to attack first.

A lot of teams focus on just learning how to develop and run tests locally in Docker on their machine, using something like Docker Compose. That’s a pretty common thing for developers to go after.

But at the same time, I work with teams that are just getting into containers, and their biggest push is to streamline their testing in CI and their deployment out of that CI into CD solutions. They’re not necessarily motivated by streamlining local development. They’re more focused on what happens once the code leaves the developer’s machine and lands in a Git repo. They want to make that whole process faster and better.

The importance of using Docker for CI/CD

Darko: (10:12) Okay. Thanks for sharing that. I have a follow-up question in that area. You mentioned the development environment, the CI/CD environment, and the production environment.

Let me just add that over the years, one of the biggest struggles of running continuous integration has been the difference between your development environment and production. A mismatch of a single library version can cause endless hours of debugging: it works on the local machine, but it doesn’t work in CI. Containers are definitely solving that.

In the area of developer happiness and embracing Docker, is the main thing being able to deploy and know that it was deployed? Or is the local development experience something that’s more important for some people?

Bret: Yeah. I think it depends on the person. If you’re a developer whose work is just local, your job ends at doing a git push to a repo, someone else manages CI/CD, someone else manages production, and you’re not having to do all the different jobs, then I think local developer happiness is a big issue.

One of the key places where Docker really saves you so much time in local development nowadays is when you’re expanding your microservices rollout. You’re ending up with possibly a dozen different codebases running separately on your local machine. It’s pretty hard to do that without containers; before we really had containers, it required a lot of maintenance and scripting and automation and Vagrant and all these different things.

But now, in all the setups I see where people are focused on local microservice development, they’re using Docker, Docker Compose, and Docker Desktop.

They’re using that Docker Compose YAML file as a way to script out the dozens of different containers that all need to run at the same time, so that they can develop on just one of them while using the APIs of the others. Maybe they’re building on an API microservice framework and need a bunch of other services running on their local machine just so that they can start developing.
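As a concrete illustration of the kind of Compose file Bret describes, here is a minimal sketch. The service names, images, and ports are illustrative assumptions rather than details from the episode: one service is built from local source while its supporting services run as prebuilt images.

```yaml
# docker-compose.yml -- hypothetical local development stack
version: "3.8"
services:
  api:                        # the service you are actively developing
    build: ./api              # built from local source so code changes are picked up
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://postgres:secret@db:5432/app
    depends_on:
      - db
      - cache
  db:                         # supporting services run as prebuilt images
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: secret
  cache:
    image: redis:6
```

A single `docker compose up --build` then brings the whole stack up locally, which is how all of those containers end up running together with one command.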

I don’t think we would have a discussion around microservices today if we didn’t have containers. For the tooling around managing a whole bunch of different things running together and seamlessly talking to each other over the network, using TCP/IP as the backbone for how your app communicates, I think containers are really the only feasible way of getting that done at this point.

If you’re someone who has to care about more of the pipeline, someone who also has to manage the CI/CD, who is in the CI solution every day, who has to care whether the updates actually make it to the server, or who is maybe even the person deploying to the server, if you’re doing all the jobs, containers really reduce the level of complexity.

If you’re having to manage all those different parts, there are so many different tools and scripts and commands that you have to learn if you don’t have containers.

Now that we have containers, obviously, we’re always going to have a little bit of chaos. But I think that being able to know that the libraries I have locally are exactly the same ones that I have on the server, and that the codebase is exactly the same, has essentially allowed us to go faster. That’s going back to that same theme of speed.

Now developers can stop worrying about managing every single commit through the pipeline. They can just keep developing, opening pull requests, focusing on that workflow, and let the rest of it be automated.
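To make that automated pipeline concrete, here is a minimal sketch of a container-centric CI configuration in Semaphore-style YAML. It only illustrates the idea discussed above, building one image and running the test suite inside it; the machine type, OS image, image tag, and the `npm test` command are placeholder assumptions, not details from the episode.

```yaml
# .semaphore/semaphore.yml -- hypothetical pipeline sketch
version: v1.0
name: Build and test in Docker
agent:
  machine:
    type: e1-standard-2       # placeholder machine type
    os_image: ubuntu2004      # placeholder OS image
blocks:
  - name: Docker build and test
    task:
      jobs:
        - name: build-and-test
          commands:
            - checkout
            # Build the same image that will later be promoted to production
            - docker build -t my-app:$SEMAPHORE_GIT_SHA .
            # Run the tests inside that image, so CI uses exactly the
            # libraries that local development and production will use
            - docker run --rm my-app:$SEMAPHORE_GIT_SHA npm test
```

The point of the sketch is that the image built here is the artifact that moves through the rest of the pipeline, rather than dependencies being reinstalled at every stage.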

Changing roles in DevOps

Darko: (14:40) You mentioned DevOps. At some point, we pushed all the people who manage servers to learn to code to some extent and to have that infrastructure described in code. Roles in IT are constantly changing. Are you seeing any changes in the roles, and maybe their responsibilities?

Bret: Yeah. I think there’s a great discussion continually going on in the community about whether we all really need to know more and more things. I largely think that, overall, we’re all just human and we have a limited capacity for memory and learning and all that.

I think our roles are shifting. We don’t know every line of code in the apps we’re running. We’re all using frameworks now. Unless you’re sending a spaceship to Mars, you don’t have to know every byte of your code.

You can now abstract that out and just say, I don’t need to know the web frameworks. I don’t have to build the webserver. I don’t have to build the datetime library that manages datetimes for my app. That’s someone else’s job.

The good news is we don’t have to care about that as much and we can focus on new things like DevOps, and maybe a little bit about server deployments, and a little bit about monitoring and observability.

The negative of that is that we’re running a lot of code that we don’t know. When things start to go wrong, that, I think, is sometimes what really separates the senior engineer from the junior engineer: both can write code and both can deploy apps.

But the senior engineer is probably going to have all that experience that you just can’t really teach. They’re going to be like, “Well, okay. This might be something related to network firewalls or network address translation,” or something at a lower level that a junior developer never had to care about before.

Darko: (42:53) If you haven’t already, check out Bret’s courses and workshops. We’ll also share a link to his YouTube channel in the description. And if you haven’t already, please subscribe to our channel.

Meet the host

Darko Fabijan

Darko, co-founder of Semaphore, enjoys breaking new ground and exploring tools and ideas that improve developer lives. He enjoys finding the best technical solutions with his engineering team at Semaphore. In his spare time, you’ll find him cooking, hiking and gardening indoors.