Tim Hockin, one of the originators of Kubernetes, is an artist turned engineer. We spoke with Hockin about the state of Kubernetes in 2018, the past and the future of the project and the buzz around it.
You describe yourself as “a systems software engineer [who writes] software most people don’t see” on your homepage. That’s hard to believe when looking at Kubernetes’ success and the exposure it gets. What went wrong? How come so many people see “your project?” What’s the key to its success?
I think the key to its success is that it solves real problems that real people have. There are a lot of projects that try to change the way people work or the way people think about things. Kubernetes came directly from a real experience at Google, where we built Borg. We’d been using it for 14 years and it worked very well. I suddenly realized, “One day I’ll leave Google, and when I do, what will I do? How will I exist without all these tools?”
The reason Kubernetes is successful is that people look at it and don’t understand why they need it until they see it do stuff. Then they say, “Oh my God, I need that!” I can’t count how many talks and presentations I’ve done in front of skeptical audiences who didn’t understand what it’s for. Just by showing short and simple features — like a rolling update that I stop halfway through and then roll back — I watch what happens. They’re like, “That’s what I do by hand. It takes me a ton more energy than what you just did.” So I think it captures things that people really wanted.
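The demo Hockin describes maps to a small amount of configuration. As an illustrative sketch (names, labels, and the image are placeholders, not from the interview), a Deployment with a rolling-update strategy replaces pods gradually when the image changes, and `kubectl rollout undo deployment/web` rolls it back:

```yaml
# Hypothetical Deployment sketch: rolling updates with automatic rollback support.
# Changing .spec.template (e.g. the image tag) triggers a gradual rollout;
# `kubectl rollout undo deployment/web` reverts to the previous revision.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down at a time during the update
      maxSurge: 1         # at most one extra pod created during the update
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0   # placeholder image
```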
I would also add that in the bigger picture, Kubernetes is really not seen by a lot of people. It is not and will never be as visible as Android, or Chrome, or Google Maps. It is still very much “software that most people don’t see.”
You mentioned Borg, and I was wondering what the relationship is between Borg and Kubernetes. I doubt that it’s one-to-one.
It’s derived from the same ideas and the same patterns, but Borg is millions of lines of code that are very Google-specific. It has thousands of features that nobody would use except Google.
There was a movement called “Google infrastructure for everyone” way before Kubernetes got so popular. Do you think that there are still some tools you use inside of Google infrastructure that can be open sourced and added to the stack of modern DevOps tools?
I hope so. We have made a ton of investment in tools that increase developer productivity: compiler tools, distributed builds and automated testing. There have been some blog posts and papers written about how Forge, our distributed build system, works. We have little things that you might not think about, like test log collection. Every time I run a test through our test system, the logs are captured and stored somewhere. Then I can go back, look at all of them, and realize that they are giant.
We’ve got some really great code review tools that are integrated with static analysis. Googlers can add plugins that say “I want to do an analysis of this sort of thing on my code base”. You put these things in and when somebody sends you a code change it will run your analyzer against their change. Neat stuff.
We’ve got these live debugging tools which will capture traces across test runs. If a test fails you can actually go back and single-step through the test and figure out what happened and why. All this happens automatically. It’s not like you have to say, “All my tests fail. Let me go run it in the debugger.” It’s captured automatically because the cost of saving the data is just so much lower.
So this is one area where I think Google infrastructure can continue to help influence the world. We’ve got a lot of other things — internal storage systems and other things that are sort of trickling out a little. Bigtable and Spanner are out as Google Cloud products. These are fantastic tools and they have really transformed the way we do things internally. I think we’re getting better at putting out more of what we do.
You mentioned that Kubernetes is actually helping people in what they do, but it’s no mystery that it has a steep learning curve. It’s complicated and spans a variety of topics in the software development and maintenance niche. In one of your interviews, you mentioned that the initial mission behind the project was to “provide the hub of ecosystems, plural.” K8s was initiated to minimize the human effort of running and scaling software, but in the meantime the project keeps adding layers of complexity of its own. Could you elaborate a bit on that? Is there an end to the new ecosystems it brings to life?
I hope that each of these “follow-on ecosystems” is optional and that nobody should be forced to adopt one if it doesn’t actually bring any benefits. Istio is a great example — Kubernetes is an application management platform, and it does networking out of necessity because apps are networked. But it doesn’t make sense for Kubernetes to try to be the ultimate network abstraction. It can’t be the perfect service discovery because there is no such thing as perfect. There are a hundred service discovery applications out there. It can’t be the perfect load balancer and it can’t be the perfect traffic router, etc.
These things are complicated services, and we’ve been trying to keep Kubernetes relatively simple at its core. Early on in the process we created this document, “What is Kubernetes.” We defined what we thought it was, what our swim lanes were, what we were doing, and what we were not going to do. We defined how big the Kubernetes box was and that everything else was outside of it.
Networking is in the box, but only a little bit. We need to keep just enough in to keep things working. Then you look at what people really need beyond that. For something like a Layer 7 HTTP load balancer, go read the specs for Nginx or Envoy or HAProxy. They have hundreds and hundreds of features that are specific to their implementations, and they’re all different from each other.
There’s no way that Kubernetes could implement all of those things — we would be in a race forever. That’s not where we want to be, and we don’t want to be competing with those companies and those projects. We don’t want to provide an abstraction over these things because abstractions lose information, details, access to the underlying systems. We decided that our swim lane is the minimum we can get away with that helps basic users do basic things. The only thing we’ve got in is the Ingress API. It’s a really simple HTTP API, simple to the point of being unsatisfying. Almost everybody who uses Ingress ends up doing something with their specific implementation of Ingress because they need the extra features.
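The deliberate minimalism Hockin describes is visible in the API surface itself. As a sketch (the hostname and service name are placeholders), a basic Ingress expresses little more than “route this host and path to that service” — everything beyond that is left to the specific controller implementation:

```yaml
# Minimal Ingress sketch (networking.k8s.io/v1 schema); host and service
# names are hypothetical. The API deliberately covers only basic routing —
# implementation-specific features live in controller annotations and CRDs.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service   # placeholder backend Service
            port:
              number: 80
```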
Then Istio comes along and says, “We’re going to tackle the problem of networking!” They are not a container management system. They work with container management systems. They work with Kubernetes. They are a networking project, and just as Kubernetes has a hundred features for container management, Istio has a hundred features for traffic management. You don’t have to use Istio if you don’t want to, if it doesn’t bring you any value. But for a lot of people it does bring incremental value. You can start with a simple API and move into a more detailed, robust and advanced API. It brings complexity, but complexity that comes with value. Complexity needs to justify itself, because complexity for its own sake is cancer.
I’m a little bit sad how complex Kubernetes is right now. In some sense Kubernetes is not for end users, it’s for people who set up clusters and clusters are for end users. At Google, we’ve always separated the roles that were involved in cluster management into two very specific ones: the cluster operators and the application operators.
The cluster operators know everything about Borg and they know a little bit about each application that runs on Borg in a profile they understand, but not the details. They’re able to keep the Borg clusters up and running and they have SLAs around that and so forth. The application teams (like Google search or Gmail) come in and they say “I know how to use Borg but I don’t know how Borg runs and I don’t need to.” They come in and they run their applications on top of Borg and that split gives people a really nice ability to focus. Kubernetes is really aimed at those cluster operators.
The truth is that a lot of companies and organizations today are set up with those two roles combined in one. It’s tough to get the resources to separate them, so they end up absorbing this complexity. It’s unfortunate that networking and security are hard, and they are a big source of complexity for Kubernetes. They’re the hardest parts of setting up Kubernetes because they’re still the hardest parts of our industry in general.
As one of the initiators, you have a big picture of where Kubernetes is heading and what are the biggest issues ahead. Could you give us a “Founding Father summary” on the state of Kubernetes in 2018?
With 2018 almost over, I think we’ve actually done a decent job of responding to some of the complaints about it being hard to set up certain things in Kubernetes. The project has gone a long way toward making cluster setup easier (not easy, but easier). There’s still more that we can do there.
I think Kubernetes is starting to get to a place where it’s a little bit more stable. We’re not seeing as many major features come in anymore. The development of the system has moved more and more to plugins and things that are outside of the core. You can see this even in the graphs of the number of commits to the GitHub kubernetes/kubernetes repo — it’s trending down. That’s a good thing, because more and more projects are moving out of the core repo. Overall, that’s good.
Meanwhile, the community has grown, Slack has grown, and the number of engagements and customers has grown enormously. Those are pretty healthy signals to me.
We still have too many bugs and too many PRs open. The developer experience is still being worked on. The code review process and the design process are still being iterated on. This year was really successful. Looking back at the beginning of the year, version 1.8 feels very primitive. We’re closing out more and more of our wishlist and getting further down the list of important features.
Do you think that the ideal state (if there is such a thing in the software world) is that the project remains stable? I.e., no big changes occur, it just works, and the only thing left to do is add plugins to it?
Yes, that would be wonderful. It’s software, so nothing is really timeless. We’re constantly updating for things like security, and there’ll always be a trickle of new features coming in. But the rate of change (especially for things that have user-facing impact, like API changes or functionality changes) is definitely slowing down, and it has to, for the sake of the project.
More and more things have to live in the follow-on ecosystems. I don’t want to do a ton of new HTTP-oriented work. I want that to land in other systems like Istio and other service meshes like Conduit. They’re more expert in networking than I am. We’re putting various extension mechanisms together which allow people to do more and more things in ways that are not second class.
Kubernetes’ mission is to ease developers’ and DevOps professionals’ pains in scaling and maintaining software — it’s all about reliability and uptime. Continuous Integration and Deployment touch the same topics, but at a level closer to source code and testing. Is there a common space between K8s and CI/CD that needs to be addressed in the future?
So this is one of those places where I don’t think Kubernetes itself should go. I think CI/CD is the most important thing that people want to use a system like Kubernetes for, but I don’t think that Kubernetes should be trying to be a CI/CD system. There are commercial offerings, there are Open Source offerings, there are cloud centric offerings and I think those experts should be focusing on how to do CI/CD.
The interesting question is more what Kubernetes can do to make it easier to consume from CI/CD. Whether that’s more triggers, more push-based solutions or the GitOps workflow process — those sorts of questions are the interesting ones. If anybody submits a patch to add CI/CD to the Kubernetes API, I will do my best to shut it down. I don’t think that makes sense. But I certainly want Kubernetes to be useful for that. The number one thing we hear from people is, “This looks really cool — can I run my build against it?” We can do a better job of making that easy and possible. I think we’ve done a lot of evolution there. A good example is the ability to do Docker builds without Docker. This sort of thing is important.
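One way CI/CD systems consume Kubernetes today is by running builds as ordinary Jobs. As a hedged sketch (the repository, registry, and image names are placeholders), a tool such as kaniko builds a container image inside a pod without needing a Docker daemon — one instance of the “Docker builds without Docker” idea mentioned above:

```yaml
# Hypothetical sketch: a container-image build running as a Kubernetes Job,
# using kaniko to build without a Docker daemon. Context, Dockerfile path,
# and destination registry are placeholders; a real setup also needs
# registry credentials mounted into the pod.
apiVersion: batch/v1
kind: Job
metadata:
  name: image-build
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: kaniko
        image: gcr.io/kaniko-project/executor:latest
        args:
        - --context=git://github.com/example/app.git   # placeholder repo
        - --dockerfile=Dockerfile
        - --destination=registry.example.com/app:latest  # placeholder registry
```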
In one of your interviews you said that “people don’t wanna bet on a new horse,” and I guess this was the way with Kubernetes in its early days. A lot of effort has been put into persuading decision-makers and senior devs to use the project and making it one of the busiest communities on GitHub. What do you think really contributed to making people adopt K8s despite its complexity and novelty?
Kubernetes got the most attention from us delivering real demonstrations that showed people how to do the old things. We tried not to be academic. We tried not to be theoretical about what to do and think. For example, we showed how to make a rolling update of an application and roll it back when something goes wrong during the update. This is something that DevOps people do every day, and it’s not always automated. With Kubernetes you can make it easier. You can have the automation around a consistent API and consistent tooling, so that you can do this more often with more confidence. This is exactly the issue we’ve heard about from a lot of people, and that’s why they now talk about Kubernetes adoption. They say, “I went from one push a day to 20 pushes,” and I think those are the things that got the most attention.
The first KubeCon was fewer than a thousand people, and now we’re looking at sixty-five hundred. But it was important that we always put in the demos. A lot of my presentations early on were just demos, like “Let me show you 10 things that I can do with Kubernetes.” Those are the things that captured people’s attention. I remember I did one at USENIX LISA for system administrators, and after the talk people came up and were like, “This is my job — you just automated my job.” I feel a little bit bad about that, but at the same time I hope this frees them up to do other things and jobs.
What Kubernetes does is self-evident. It has allowed people to take code and deploy it quickly and whether you’re a CEO or a developer you see great value in this.
Kubernetes is now a bleeding-edge technology keyword. It dominates the so-called “DevOps industry.” The project is in fact so cool that when I saw the Non-Code Contributor’s Guide, I was excited that I might be of some help (even before properly educating myself on the topic). I even saw an article that resonated in the community, entitled “You might not need Kubernetes.” Is the hype around the project any good, or does it create too much noise and shift the original focus of the project?
There’s a lot of noise, and anytime there’s a successful thing and money’s involved, there are a lot of people trying to shift the message back to themselves, their product or how to make money on the thing. One of the things we were instructed to do when reviewing talks for KubeCon was to make sure that the talks are not vendor pitches. We don’t want people to pitch their product. One of the main reasons I ended up rejecting proposals was that “this is just a pitch — a company is showing me their product,” and that’s just not what people want to see at a conference like this. I don’t think the noise itself is inherently bad. The more awareness we have, the better off the project will be.
Obviously, I care a lot about the success and the longevity of Kubernetes. I see that when people come in now, they come with higher expectations. As the bar gets higher and people start to accept that you can do certain things automatically, they’re looking for what’s next. Now that Kubernetes is out there, there’s also a half dozen other systems that can do things in the same way. They make different tradeoffs, of course. People come in with different levels of expectations but I don’t think that diminishes the value of the community.
The Non-Code Contributor’s Guide exists because we have a lot of people who are working at companies using Kubernetes, or who are interested in the technology, and say, “I’m not really a developer, but this is interesting and compelling stuff. How can I help you?” There are a hundred things that non-coders can do to contribute.
I saw that you and your colleagues will be delivering a keynote at the upcoming KubeCon in Seattle, entitled “Stories About Kubernetes Beginnings That You’ve Never Heard.” Can you share one fun fact for this interview that will encourage readers to come to Seattle in December?
I don’t want to give away too much of what we’ll be talking about, but there’s been a ton of things that have happened especially in the early days that were surprising and took the project in different directions than I expected.
One of the earliest customer meetings we had was with a bunch of representatives from different companies — Yahoo, Box, eBay and others. They came to our Google offices to see what this Kubernetes thing was all about, and we gave them a demo and showed them what we had. I thought that everybody was very skeptical. A couple of people in particular had really, really hard questions for us. They had clearly done their homework. We didn’t have great answers for all of them because it was still very early.
After the round of questions, Sam Ghods from Box (one of the founders and a systems guy) said, “I like it. I’m in.” For me that was a big validation. Here’s a guy who built a successful company solving hard systems problems, who looked at what we were doing and came on board. Sam rewrote our command-line tool and had a really meaningful impact on the project. We never expected that, but this was the sort of thing that made the project successful.
We have tons of these really interesting and unique individual contributor stories that are really inspiring. Hopefully we’ll be able to touch on a few of those during our keynote at KubeCon.
I’m a careful reader and I noticed on your homepage that you’re a fine arts minor! Are you an after-hour artist and paint large-scale steering wheels?
I got my first computer when I graduated from high school. Before that I was a painter — all through high school I studied painting and sculpture. I went to university to become a painter, and my parents tried to talk me out of it. My father was an engineer, my brother is a mechanical-electrical engineer, and my mom kept telling me, “Why don’t you go be an engineer, and you can paint on the side?” I said, “No, I want to be an artist and a teacher” (I had a really influential art teacher in high school). So I was doing a double major in Fine Arts and Education. Then I got a computer as a graduation gift. I started to play with it and realized how much fun it was. One of my friends took a C programming class, so I took it as well, and it was easy. It just spoke to me. I understood it all. I realized that I really enjoyed doing this and wanted to do more of it. I took more classes and eventually changed majors from art to computer science.
I don’t paint much after work, as it requires a time and space commitment that is hard to meet. Sculpture even more so. But I do draw a lot, and I’m known around our office for doodling on sticky notes, especially during meetings. It’s like my brain’s screensaver. There’s even a running joke that if Tim is drawing, it means he is paying attention.
Doodling is not exactly the same as painting, but it’s a creative outlet. The truth is that writing code satisfies the same urge that painting does — the need for creative expression. I don’t write as much code today as I did a couple of years ago, but when I do, I feel that same sense of pride from doing something creative on my own.
Article originally published on The New Stack.