3 Mar 2022 · Software Engineering

    Why a Well-Oiled CI/CD Pipeline Makes for a Happy DevOps Team

    12 min read

    JJ Asghar’s role as a developer advocate for IBM has immersed him in helping organizations make the transition to cloud native ecosystems and to IBM Cloud’s Kubernetes Service. Previously, Asghar was the OpenStack go-to guy at Chef Software.

    In this interview, Asghar has drawn from his deep well of hands-on and high-level experience to describe the trials and tribulations organizations face when adopting cloud native platforms. He also discusses when using Istio’s service mesh for Kubernetes makes sense and the immense benefits a well-run CI/CD pipeline offers enterprises.

    Could you speak from your experience about the biggest push factor for enterprises to adopt cloud native and serverless platforms?

    It boils down to the fact that there are a lot of applications out there still running Java. Tomcat and JBoss were very successful because you could just take a WAR file and dump it into Tomcat. Over time, though, the new generation coming out doesn’t want to use Java. They want to use Ruby, Python, Node, or Go, and Kubernetes is now becoming the enterprise’s unified control plane, or application layer, for anything. You can run Java inside of there, you can run a Python app inside of there, and it gives you the same thing Java shops used to get from Tomcat, but now across every single language. It’s extremely powerful, because you no longer have to spend all this time maintaining multiple infrastructures to run your applications; you just have Kubernetes run everything across the board.

    Does it seem that enterprises notice that Java is a bit out of date?

    Not necessarily. There is actually amazing progress with the newer versions of Java. There is a version of Java out there that has been compiled down and run on Knative to the point where it is ridiculously fast and easy to use. I believe that at DevNexus, which is a conference coming up in a month or so, there’s going to be a really big push about showing it off. Java is not dead; it’s the workhorse of the enterprise.

    I work at Semaphore, where we often emphasize the mantra “optimize for happiness,” and we hear from our clients about how a flexible CI/CD pipeline improves the lives of whole teams and is a real enabler. Could you describe why having a well-oiled CI/CD system in place is so crucial to implementing and maintaining enterprise-class software?

    That is a very, very important question that I used to hear multiple times in previous lives at the different companies I worked for. The beauty of a good CI/CD pipeline is that it is a comfort blanket. If you’re moving towards cloud native, containers, or just configuration management, CI/CD is your comfort blanket. It is the thing you can trust to always do the right thing, and it’s not human; the robots are doing the work.

    Unfortunately, to get to a really successful CI/CD pipeline you can’t just jump straight into it, as you probably know from Semaphore. You need to do it piecemeal; you need to learn to trust the software to do the work, and you need to start very simple.

    One of the best stories I have from teaching CI/CD to someone was with Python. There are a lot of Python developers, and the PEP 8 style is pretty standard across the board. The first thing you should do in the CI/CD pipeline for any Python project is to have a pep8 check run against it before it gets merged into master, just to make sure that you’re doing exactly what you expect. Then you add your unit and integration tests. You keep moving forward, and with that you’ll be able to build out the full pipeline. Before you know it, every time you see a new PR you see the little green check, and you have confidence that it can go into master and not blow up the world.
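    For illustration, a minimal Semaphore pipeline that runs that style check on every push might look roughly like the sketch below. The machine type, OS image, and project layout here are placeholder assumptions, not something from the interview:

```yaml
# .semaphore/semaphore.yml: a minimal lint-first pipeline sketch
version: v1.0
name: Python style check
agent:
  machine:
    type: e1-standard-2      # placeholder machine type
    os_image: ubuntu2004     # placeholder OS image
blocks:
  - name: Lint
    task:
      jobs:
        - name: PEP 8 check
          commands:
            - checkout                    # pull the repository
            - pip install pycodestyle     # pycodestyle is the renamed pep8 tool
            - pycodestyle .               # fail the build on any style violation
```

    Once a check like that is trusted, unit and integration test blocks can be appended to the same pipeline, one small step at a time.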

    But how are enterprises doing CI/CD in general?

    In general, it’s the same conversation, but it’s much slower. With enterprises, you have so many people touching one codebase, or multiple codebases, at different levels of CI/CD pipelines, and there are so many different products out there that do this kind of work. But the beauty of it is that you can find what fits your company’s portfolio and mold it to fit your workflow; or you buy an opinionated one and say, “hey, this is the way I want it. I know that this opinionated workflow succeeds because of X, Y, and Z,” or “I need one that is completely flexible, where I can do literally anything I want.”

    You recalled in a recent interview how working as a DevOps engineer at Chef enabled you to spend more time with your family and to no longer worry about having to run to the data center to reboot servers and get the infrastructure rolling in the middle of celebrating the winter holidays. You even went as far as to admit that “Chef changed your life.” What do you think should be taken care of today? What’s still a big issue in running cloud native apps that affects people’s lives and prevents them from getting an uninterrupted eight hours of sleep at night, both for systems administrators and for people running businesses on the cloud?

    My story about Chef and how it changed my life is pretty straightforward. I was working in a shop that did some application work, and I used to wake up at three in the morning to do rollouts. Eventually, I got to the point where I was able to use configuration management to do the work (thanks to Chef). I actually created a CI pipeline by accident, by introducing Test Kitchen to do the work for me. I went to my boss and said, “I don’t want to wake up at 3:00 in the morning on a Saturday.” I went through the whole process. I showed him the Test Kitchen CI pipeline, walked him through it, and showed him that it all checked off. I didn’t take any machine down, did a rolling restart, and said, “I’m so confident in this thing, I will quit if I take down production. I’m an honorable man; I’ll fix the thing, get it back up, and then I’ll write my resignation letter.” The boss said, “I’ll take that bet.” I did the typie-typie, I pushed it out, and it worked perfectly fine. He looked at me and said, “I can’t believe that actually worked.” That changed my life. I was now able to roll things out whenever I wanted, because I had a CI pipeline, so I didn’t take down production. Fast forward six months, and I was working for Chef.

    Having this in mind, are there still any tools that could really change people’s lives?

    In general, the idea of using container technology to have that flexibility and velocity, using CI/CD to make sure that you have confidence “just like that” is extremely important. To have that level of confidence to say that CI/CD has checked off and everything is green — that will change your life; especially if you’re coming from the VM world and you don’t use cloud native stuff. When you start walking through and building pipelines to do the work, CI/CD is invaluable.

    You spoke earlier this year at Configuration Management Camp 2019 about the Istio service mesh. This project is often the next thing people approach after adopting Kubernetes, with all its complexity. What would be your advice when it comes to, “When should I think about using Istio on my K8s cluster?” What are the most common issues that Istio can help resolve?

    Kubernetes is just a scheduler. Well, it’s extremely complex, but not if you boil it down to exactly what it does all day, every day. You tell Kubernetes, “I need this container running,” and you create a manifest with a bunch of variables around it. That’s all it really does. There’s a whole ecosystem around Kubernetes that needs to be created.
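    That manifest is just a declarative description of the containers you want kept running. A stripped-down example, with the image name and port invented purely for illustration, looks like this:

```yaml
# A minimal Deployment: "I need three copies of this container running."
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          image: registry.example.com/hello-app:1.0.0   # hypothetical image
          ports:
            - containerPort: 8080
```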

    Istio is specifically about networking; that’s what it boils down to. The moment it clicked for me, the story I tell people about why you should give a damn about it, is the moment I saw Istio with the Envoy sidecar. You can write intelligent routing that reads HTTP headers and changes which container a request goes to. Take a normal three-tier app: you have the Web frontend, your application server, and your database. Obviously, your database is somewhere not on Kubernetes, but that’s a different topic and a hotly debated one. So you have your Web frontend and your application server. Let’s say you want to release a new iPhone app; you can use Istio to look at the iPhone User-Agent HTTP header and send that traffic specifically to a beta version of a container over in a corner, which still talks to your production data. This is how you can have real production data in a secure manner. You can put any arbitrary headers on top, so if you have a QA team or a design team, you can tell them, “Hey, use this hash when you hit my production data and you’ll be routed into the small piece of something we want to release tomorrow.”
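    In Istio terms, that User-Agent trick is a routing rule in a VirtualService. A rough sketch of the idea, with the host and subset names invented for the example, might be:

```yaml
# Route iPhone traffic to a beta subset; everyone else stays on stable.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: frontend
spec:
  hosts:
    - frontend
  http:
    - match:
        - headers:
            user-agent:
              regex: ".*iPhone.*"   # match the iPhone User-Agent header
      route:
        - destination:
            host: frontend
            subset: beta            # the beta container "over in a corner"
    - route:
        - destination:
            host: frontend
            subset: stable          # default production route
```

    The beta and stable subsets themselves would be declared in a matching DestinationRule keyed on pod labels, so the same trick works for QA hashes or any other arbitrary header.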

    Kubernetes does all the work to make sure everything’s there, and, back to the CI/CD story, you can put headers in to have CI/CD push specifically to a particular namespace, have Istio route to it, and then you have a safe way to make sure the public doesn’t use it, or that only a specific group of people uses it.

    So it’s about complex traffic management?

    That’s only a portion of it; that’s just where it clicked for me first. There’s a whole world to it. You can make sure specific containers only talk over secure channels with one another. There’s free telemetry: as soon as you install Istio you get a bunch of different plugins immediately, where you can show your boss a graph of what’s actually happening between your microservices, and there’s even a Vistio diagram where you can see all the different connections happening almost in real time.
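    The secure-channel piece is mutual TLS between the sidecars. In more recent Istio releases that can be turned on for a whole namespace with a single resource; a hedged sketch, assuming a namespace called my-namespace, is:

```yaml
# Require mTLS for all workloads in one namespace (newer Istio security API).
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-namespace
spec:
  mtls:
    mode: STRICT
```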

    The complexity doesn’t ever stop.

    I mean, that’s the beauty of it, right? You see the complexity in front of you. Over time, to your point, as you get more and more microservices, it gets really, really complex. You can have that up on a screen where you see the requests per second going through the different microservices, and figure out what’s actually happening.

    In your article “Building and Leading an Open Source Community,” you underline how difficult it is to build an active community around a project, especially when it has to “make the corporate overlords happy.” You also mention that “80 percent of Open Source projects” never get to the point where they can say they grew something organically that’s valuable both to the community and to the companies backing the project. So, how has the explosion of Kubernetes served as the exception to this rule?

    I’ll start with OpenStack. It’s a really awesome Open Source project where the idea was, “We’re going to build a free and open cloud.” I think it’s been around for six or seven years now, and everyone had a voice. It was amazing. It was completely democratized, and anyone could work on it. It was open. It was great.

    Over time, what happened is that humans are human: mistakes are made, things happen, personalities change, and companies come in and create problems. I genuinely believe one reason why Kubernetes is so successful is that they took a lot of the lessons learned from how OpenStack was run and created another foundation for the cloud native story. It’s also because it’s mainly supported by a handful of companies in the CNCF that can actually say “yes” or “no” to things. So there is an actual trust circle to make sure that random stuff doesn’t just happen.

    I genuinely believe that one of the main reasons why Kubernetes is now so successful is that there is a level of oversight that keeps really bonkers things from coming in out of the blue, and what comes out of it is an actual project that is extremely successful.

    At FOSDEM, for example, I’ve heard people talking about, “years ago it was all about OpenStack, and now it’s all about Kubernetes.”

    These are exactly the same people. If you actually go around, you’ll see the exact same people who moved from the OpenStack community into Kubernetes. Hell, I’m even one of them.

    Article originally published on The New Stack.

