This episode features John Clingan, product manager at Red Hat, founder of MicroProfile, and co-author of "Kubernetes Native Microservices with Quarkus and MicroProfile". We'll talk about the challenges of developing microservices in Java, using Quarkus and MicroProfile to speed up start times and reduce memory footprints, and how to transition to a Kubernetes-native experience.
- How to bring Java applications into a cloud-native environment.
- What Quarkus and MicroProfile are, and how they help developers deploy on serverless and Kubernetes.
- Speeding up the development cycle of Java applications.
- Making Java the preferred language for the cloud.
Listen to our entire conversation above, and check out my favorite parts in the episode highlights!
Like this episode? Be sure to leave a ⭐️⭐️⭐️⭐️⭐️ review on the podcast player of your choice and share it with your friends.
Darko (00:02): Hello, and welcome to Semaphore Uncut, a podcast for developers about building great products. Today I’m excited to welcome John Clingan. John, thank you so much for joining us. Please just go ahead and introduce yourself.
John: I’m a product manager at Red Hat, and I’m kind of a community product manager for Quarkus, which I think we’re going to talk about, but I’m also heavily involved in the MicroProfile community. Most recently, I’m co-author, along with Ken Finnegan, who unfortunately couldn’t make it today, of a book on developing microservices with Quarkus and MicroProfile for Kubernetes.
Use the special code podsemaphore19 to get a 40% discount on the Kubernetes Native Microservices with Quarkus and MicroProfile book, or any other product from Manning Publications.
What are Quarkus and MicroProfile?
Darko (00:45): Can you please give us some background on Quarkus and MicroProfile?
John: I’m going to begin a little bit differently, in that I suspect some of your listeners have an understanding of what Java EE is. Historically speaking, they may have heard of it, maybe have even developed with it. Basically, Java EE is a collection of specifications. Those specifications used to be done in something called the JCP, the Java Community Process. Now it’s moved into the Eclipse Foundation and was renamed Jakarta EE. So, in your minds, just think of Jakarta EE as the way Java EE is going to move forward.
MicroProfile is about creating Java specifications for developing microservices. Think of things like configuration. There’s a modern REST client API for invoking RESTful services. And there are also some interesting ones that tie into the Kubernetes aspect, like health checks that expose the health of your application to the underlying platform, and distributed tracing as well.
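To make the configuration idea concrete, here is a minimal plain-JDK sketch of the layered lookup that MicroProfile Config standardizes: system properties override environment variables, which override a built-in default. The property name `greeting.message` is a made-up example, and real MicroProfile Config injects values with `@ConfigProperty` rather than hand-rolling the lookup like this:

```java
// Hand-rolled sketch of MicroProfile Config's ordered-source lookup.
// Illustrates the precedence only: system property > env var > default.
public class ConfigSketch {
    static String lookup(String key, String defaultValue) {
        String fromSystem = System.getProperty(key);          // e.g. -Dgreeting.message=hi
        if (fromSystem != null) return fromSystem;
        String envKey = key.toUpperCase().replace('.', '_');  // greeting.message -> GREETING_MESSAGE
        String fromEnv = System.getenv(envKey);
        if (fromEnv != null) return fromEnv;
        return defaultValue;                                  // e.g. from application.properties
    }

    public static void main(String[] args) {
        System.out.println(lookup("greeting.message", "hello"));  // default, unless overridden
        System.setProperty("greeting.message", "bonjour");
        System.out.println(lookup("greeting.message", "hello"));  // system property wins
    }
}
```

The point of the ordering is operational: the same application image can take a baked-in default, be reconfigured per environment via env vars, and still be overridden at launch with a system property.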
On the Quarkus side, your listeners will probably understand where I’m coming from here. Java basically was getting a bit on the heavy side for doing cloud development. The Java virtual machine isn’t necessarily the problem, but it’s just the entire stack that developers are writing to.
Deploying a typical application to a container can take a gigabyte of memory. At Red Hat, our customers doing Java development were actually considering alternatives to Java because it was rather large for containers and Kubernetes. They weren’t getting the return on their investment in their clusters because they couldn’t get the density of applications they were looking for.
And on top of that, even Red Hat’s middleware products were all written using Java. And those were also kind of getting rather large for Kubernetes. We kind of went off and did a little soul searching in engineering innovation, I guess, and came up with Quarkus, which is a runtime that is very efficient and productive.
Running Java on containers
Darko (04:58): Can you compare and contrast what Quarkus brought to the game in terms of improving those memory requirements or the performance of microservices?
John: If you go to quarkus.io, there’s a graphic that visualizes what I’m talking about right now. It shows the performance results, at least in terms of memory footprint and startup time, because in Kubernetes, when you’re spinning up containers, quite often startup time does matter. In that vein, just doing a Hello World app is 136 megabytes of memory. That’s the total footprint of memory in a container.
If you run it with Quarkus on the JVM, it’s half that: 73 megs. One of the neat things Quarkus can do is actually take your Java application and compile it to a native binary using Oracle’s GraalVM. It’s a Linux binary, but it can also compile down to Mac and Windows. So, with that, you can actually get it down to a 12-megabyte memory footprint. For Java, that’s actually pretty impressive.
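For reference, in a project generated from code.quarkus.io, switching between the two modes is a matter of Maven flags; the exact flags can vary by Quarkus version, so treat this as a sketch rather than a definitive recipe:

```shell
# JVM mode: a regular jar running on the JDK
./mvnw package

# Native mode: compile to a native executable with GraalVM
# (activates the generated project's "native" Maven profile)
./mvnw package -Dnative

# No local GraalVM install? Run the native compilation inside a container:
./mvnw package -Dnative -Dquarkus.native.container-build=true
```

The container-build variant is handy on Mac or Windows when the target is a Linux container image, since the native binary is built for Linux inside the build container.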
We measure transactions per second per megabyte of RAM. The gating factor tends to be memory, because as soon as you go above a certain amount of RAM, you may have to go to a larger VM image. Then there’s startup time, measured from starting the application to the first response back to the browser or back to the invoker.
Now, in this example, take a REST endpoint plus a CRUD application. With the traditional cloud-native Java stack, startup time is nine and a half seconds. If you run on top of Quarkus, again just running on the JDK, it’s two seconds to actually receive data back. And if you compile it down to a native binary, it’s 42 milliseconds, from hitting enter to actually getting a response back in the browser.
This is important in Kubernetes because you’re always starting containers. It’s important for serverless, where you’re scaling from zero. And if you take the combination of these two things, it actually makes Java a very nice runtime for doing functions as a service as well. That’s an area where Java couldn’t even play before using all the traditional APIs that Java developers are used to. There’s also a Quarkus lab validation study that actually puts dollars behind this: how much money you could save by using this versus a traditional cloud-native stack.
Getting started on Quarkus
Darko (14:04): Taking a step forward and embracing Quarkus, and then maybe bringing it to Kubernetes, which we can talk about next: what’s the path? Is that something you see people embrace easily in practice, or…
John: If you come from the Java EE and Jakarta EE worlds, Quarkus is going to feel like home. But if you are a Spring Boot developer, there are also Spring compatibility APIs. So you could use Spring MVC, Spring Data JPA, and Spring dependency injection. If you’re familiar with those APIs, you can actually use them with Quarkus, and we implemented them in a Quarkus-native way. So all of those things actually compile down to a native binary, and you get all the benefits I mentioned earlier, including a feature called live coding. If you’re a Node.js developer, or you’re used to running dynamic languages, you know that style of coding is very productive: you can iterate very rapidly because the runtime starts so fast, or automatically reloads code changes.
Quarkus does exactly the same thing. You can make any change you want to your Java code, to static files, to your configuration, even to the Maven POM file or the Gradle build, and it’ll actually reload the application. It’s pretty much instantaneous. And so you get this very dynamic, highly iterative experience. Java developers in the past would basically say, “I wouldn’t try things out because it would just take too long.” Now they’re saying, “Well, now I can try out things that I wouldn’t normally try while I’m coding and not have to pay this large penalty.” It makes it possible.
Making Java feel like a dynamic language
Darko (16:41): That was previously reserved for those dynamic, mostly scripting languages. And that was one of the very attractive things they had to offer. It’s interesting to hear that it is now possible in the Java world as well.
John: Yeah. And there’s no IDE tooling involved, so there are no special plugins or anything like that. It’s just a Maven or Gradle command line: you put it in developer mode and it works in any IDE, or even outside one. You can just go into a text editor and make changes, and it’ll just dynamically reload them.
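Concretely, developer mode is a single build-tool command in a generated Quarkus project; these are the standard goals, though version details may differ:

```shell
# Start Quarkus in dev mode: edit Java sources, config, or static
# resources, and the next HTTP request picks up the change.
./mvnw quarkus:dev

# The Gradle equivalent
./gradlew quarkusDev
```

From there the loop is: save a file, hit the endpoint again, and see the change, with no explicit rebuild or restart step in between.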
Kind of tying in the Kubernetes part of it, I’m going to jump ahead a little: that live coding I mentioned can actually be done remotely. You can have your application running in a container in a Kubernetes cluster and get the same dynamic live-coding experience. It’ll dynamically sync all the changes to the backend whenever you make a change. It’s pretty neat that you get remote development as well.
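As described in the Quarkus remote development guides, the setup packages the application as a mutable jar, runs it remotely with dev mode enabled, and syncs local changes to it. The URL and password below are illustrative placeholders, and the exact properties may differ across Quarkus versions:

```shell
# In application.properties (illustrative values):
#   quarkus.package.type=mutable-jar
#   quarkus.live-reload.password=changeit
#
# In the remote container, launch the app with dev mode enabled:
#   QUARKUS_LAUNCH_DEVMODE=true
#
# Locally, connect to the remote instance and sync changes on save:
./mvnw quarkus:remote-dev -Dquarkus.live-reload.url=http://my-app.example.com:8080
```

The local `quarkus:remote-dev` process watches the working tree and pushes changed files to the running remote instance, which hot-replaces them the same way local dev mode does.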
Darko (17:46): So, it’s not going to reboot the app, it’s going to have an impact on the live process, right? By process, I mean, operating system process.
John: Yeah. It doesn’t restart the JVM at all, which is why it’s actually so fast. We compile just the files that changed and then boot it all up. It takes like half a second or so. It’s actually pretty quick.
Running Java on Kubernetes
John: Now, to answer your other question about adopting Kubernetes and how to make that transition from a traditional Java application server to Kubernetes: if you’re familiar with Spring, then picking it up is super quick. If you’re familiar with the programming model, it’s also very quick. You’ll pick up Quarkus very quickly and get it running. In fact, deploying to Kubernetes is a single command line with Maven. It’s a one-step deployment while you’re in developer integration mode. I understand deployment pipelines are going to be different, but we try to make it as easy as possible to run on Kubernetes.
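That one-step deployment comes from the Kubernetes and container-image extensions. Roughly, and treating extension names and the choice of Jib as illustrative rather than prescriptive:

```shell
# Add the Kubernetes extension plus a container-image builder (e.g. Jib):
./mvnw quarkus:add-extension -Dextensions="kubernetes,container-image-jib"

# Build the app, generate Kubernetes manifests, build and push the image,
# and apply the manifests to the current kubectl context in one command:
./mvnw clean package -Dquarkus.kubernetes.deploy=true
```

The generated manifests can then be checked into source control and fed into a proper deployment pipeline, which is the distinction John draws between developer-mode deployment and production pipelines.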
Instrumenting Java apps on the cloud
Darko (19:33): I remember, roughly three years ago, when we were transitioning to Kubernetes, one of the major challenges was figuring out monitoring. There is a lot of networking inside, done in a bit of a different way. A lot of things are dynamic, going up and down, which we were not used to. So what’s the state of that: if someone is creating a new app, what is possible to get out of the box, without having to go and figure it out on their own? And to what extent does the book answer those questions?
John: There are a few things involved. Ken has actually written the distributed tracing chapter; I think that might already be out there in the early access program. It lets you do distributed tracing via Jaeger, for instance. You can have Jaeger monitor the requests between microservices and get kind of a bigger performance picture. On the metrics side, Quarkus implements MicroProfile Metrics.
Then we have vendor-specific metrics. So, as you add more features to Quarkus, you get a broader set of metrics that you can monitor. Quarkus is not like an application server, where out of the box it’s every feature you could possibly imagine. Basically, you add dependencies for the features that you need. And with Quarkus, as you add these dependencies, those dependencies are instrumented.
You can write your own business metrics within the application and expose those as well. All of those metrics are exposed either via JSON format or via basically an OpenMetrics/Prometheus format.
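To make the Prometheus/OpenMetrics side of that concrete, here is a small plain-Java sketch, not the MicroProfile Metrics API, of how one sample is laid out in the text exposition format; the metric name and tag are made up:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Renders one metric sample in the Prometheus/OpenMetrics text exposition
// format: name{label="value",...} sampleValue
public class PrometheusLine {
    static String render(String name, Map<String, String> tags, double value) {
        String labels = tags.entrySet().stream()
                .map(e -> e.getKey() + "=\"" + e.getValue() + "\"")
                .collect(Collectors.joining(","));
        return labels.isEmpty()
                ? name + " " + value
                : name + "{" + labels + "} " + value;
    }

    public static void main(String[] args) {
        Map<String, String> tags = new LinkedHashMap<>();
        tags.put("endpoint", "/orders");  // hypothetical tag
        System.out.println(render("http_requests_total", tags, 42.0));
        // prints: http_requests_total{endpoint="/orders"} 42.0
    }
}
```

A Prometheus server scrapes many such lines from the metrics endpoint on a schedule, which is why exposing them in this flat text format is enough for the platform to build time series.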
The biggest challenges in Kubernetes development
Darko (22:17): Exactly. Yeah. When developers are first interacting with Kubernetes, what are you seeing as the biggest challenges for people to understand when they’re trying to wrap their heads around Kubernetes and the way it, as you said, scales and monitors services?
John: Generally speaking, and I think you touched on it, it’s the dynamic nature of Kubernetes, with pods going up and down all the time. Especially if you really begin to adopt a flow where you might have multiple features, blue/green deployments, or feature flags, and you really want to try things out, you could be doing many deploys per day. It sounds like, from what you described, your listeners are kind of along that vein.
So you don’t have a stable view of the world; things are dynamically changing all the time. You also have immutable containers, which means any changes you make to a container, like writing to the file system, are going to be gone as soon as that container is rebooted. That’s probably, in my opinion, the main thing. Kubernetes adds constructs that let you deal with those things.
Services have a consistent name and a consistent IP address, so you have consistency provided by Kubernetes in a very dynamic environment. You have the ability to mount volumes for persistent data. But by and large, you want to make your containers immutable, meaning you just expect it all to go away. You don’t want to treat each container as an individual thing you have to manage. If you make a change, you redeploy the application, even if it’s a configuration change, like turning logging on or off. That’s kind of how Red Hat looks at it. So immutable containers, that’s a change that developers might have to get used to.
Monitoring your metrics becomes important, as does writing out your logs. Generally, you either send your logs directly to Elasticsearch, or something else you can then search, or you write them to STDOUT and let Kubernetes scrape them. Then you can analyze what’s going on live and figure out what’s wrong. Very rarely will you actually log into a container to debug things, whereas maybe in the days of old you would log into a production server and look at things. So, that aspect is a little different.
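In that stdout-based model, inspecting a service usually means asking the platform for the logs rather than shelling into a container; for example, with kubectl (the deployment and pod names below are hypothetical):

```shell
# Stream logs from a pod of the deployment instead of logging in:
kubectl logs -f deployment/my-quarkus-app

# Inspect the logs of a previous (crashed) container instance:
kubectl logs --previous pod/my-quarkus-app-abc123
```

The same stdout stream is what log collectors pick up and ship to a searchable backend, so the application never needs to know where its logs end up.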
Darko (25:13): Thank you for guiding us through Quarkus. That’s a step forward for the Java community, making it more accessible in these new, very dynamic environments. And of course, good luck with the book.
Have a comment? Join the discussion on the forum