3 Mar 2022 · Software Engineering

    On Building a Container Engine

    8 min read

    In the Developer Interview series, we talk to engineers who use Semaphore. We pick their brains about how they work, what wisdom they would like to pass on, and the most challenging problems they’ve faced during development.

    This time around, we had the pleasure of talking to Alban Crequy, a member of the team working on the rkt container engine and co-founder and CTO of Kinvolk GmbH.

    1. For a start, can you quickly introduce rkt?

    rkt is a pod-native container engine for Linux. From the start, it has been designed to be composable, secure, and built in line with existing standards and software. It was announced by CoreOS on Dec. 1, 2014.

    To elaborate, a pod is a group of containers that are deployed together on the same host. This fits perfectly with the way Kubernetes deploys containers.

    rkt is composable in that it makes use of existing technologies – systemd, gpg, tar, http, etc. – and you can choose among various isolation environments: container, VM (stage1-kvm), or none (stage1-fly).

    rkt was built with security in mind from the beginning: image signing and verification, seccomp, TPM, etc. If you want to do anything that is insecure, you have to explicitly use the --insecure-options flag. This is often useful for testing, but you want to avoid running with this flag in production.
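    As a rough sketch (the image name, version, and local ACI file below are only placeholders, not prescribed commands), the default verification flow and the explicit opt-out look roughly like this:

        # Trust the signing key published for an image prefix, then run the image;
        # rkt verifies the ACI's GPG signature by default.
        sudo rkt trust --prefix=coreos.com/etcd
        sudo rkt run coreos.com/etcd:v3.1.7

        # Skipping verification (e.g. for a locally built, unsigned ACI) has to be
        # requested explicitly; avoid this in production.
        sudo rkt run --insecure-options=image ./my-app.aci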

    2. How do you manage a large influx of people working on rkt?

    The contributor count for rkt currently stands at 188. From that metric, we can say that the project has been pretty successful at getting community contributions.

    But I wouldn’t describe the contribution flow as a “large influx”. The number of contributors at any point in time fluctuates. And we can see from the open issue count that we don’t keep up as well as we’d like. This is a common problem in open source projects that rkt is not immune to.

    But we do try to ensure that issues get addressed, especially when new contributors come to the project. We try to help out as much as possible.

    Of course, one of the ways we evaluate whether a new PR is reviewable is by checking that all the tests go green. That takes a lot of the burden of manually trying things off the maintainers.

    3. With Docker being so widespread, a question naturally comes up: how does rkt compare?

    Nowadays, we find more and more that the conversation is about containers in general, rather than Docker, rkt, or any specific container runtime. The reason is that with the advent of container standards (OCI, AppC, etc.) and of Kubernetes for container orchestration, the application developer should not have to care which container runtime is running underneath.

    However, the system architecture should take into account the design differences between container runtimes. This is where we think rkt shines. The fact that one can swap out the isolation environment (container, VM, or host), and that once the application starts there is no rkt code left running, are compelling features for systems architects.

    From the rkt side, we hear that users are happy with rkt’s stability and with the fact that it can be upgraded without stopping already-running containers, thanks to its daemonless architecture.

    Both projects are great, they just do things differently. Docker definitely has more resources behind it.

    One example of something you can do with rkt that cannot be done with Docker is to use custom isolation environments (stage1). For example, we wrote a custom rkt stage1 (https://github.com/kinvolk/stage1-builder) that we use to test various projects. We use the KVM rkt stage1 to run various versions of the Linux kernel to test new BPF features; this is how we test gobpf and tcptracer-bpf, and we wrote a blog post about that. We use Semaphore to run such tests because Semaphore offers a true VM environment. For us, this is a great feature for both rkt and Semaphore.
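    The exact setup lives in the stage1-builder repository linked above, but as a simplified, illustrative sketch (the image names and the version tag are placeholders), switching the isolation environment is just a flag on rkt run:

        # Default stage1: software isolation using Linux containers.
        sudo rkt run example.com/my-test-image

        # KVM stage1: the pod runs inside a lightweight VM with its own kernel;
        # stage1-builder produces variants of this with different kernel versions.
        sudo rkt run --stage1-name=coreos.com/rkt/stage1-kvm:1.29.0 example.com/my-test-image

        # Fly stage1: no isolation, the app runs directly on the host.
        sudo rkt run --stage1-name=coreos.com/rkt/stage1-fly:1.29.0 example.com/my-test-image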

    4. You’re moving from ACI (Application Container Image) to the OCI (Open Container Initiative) image standard. What is the most challenging aspect of the process? And what is the biggest gain for the end user?

    rkt supports App Container Images (ACI) as well as Docker images via the docker2aci tool, which transparently converts images to ACI on the fly. Basically, any container image you can run with Docker should just work. Because OCI images should be runnable with Docker, they should also be runnable in rkt. If one isn’t, that’s a bug that we fix.
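    For example (busybox is just an illustration), a Docker image is fetched and converted behind the scenes; the --insecure-options=image flag is needed because Docker images cannot be verified with ACI signatures:

        # docker2aci converts the Docker image to an ACI on the fly.
        sudo rkt run --interactive --insecure-options=image docker://busybox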

    There was some work started to support running OCI images without the conversion step. That work seems to have been abandoned, so unless someone invests the resources to finish it, rkt will continue to use ACI images transparently underneath, regardless of the image format you start from.

    5. CoreOS’s Container Linux appears to be a great operating system for large container deployments with Kubernetes (and Tectonic). Have you seen instances of the Container Linux philosophy implemented in other operating systems?

    The idea of having a minimal base OS and running everything in distro-agnostic containers can be seen in other places too. On the Linux desktop, there are attempts to emulate this with Flatpak, which is basically a container format for desktop applications. For example, Endless OS is a Linux distribution that is trying to package user apps with Flatpak.

    Another good idea from CoreOS’s Container Linux is the A/B partition scheme, which is also rather common in the embedded space. And the idea of a read-only /usr is making progress elsewhere as well; it is getting easier with systemd’s newer isolation directives such as ReadOnlyPaths.

    However, only a few of the rkt maintainers work at CoreOS, and I’m not one of them 🙂

    6. How do you handle issues and feature requests?

    We prefer feature requests to come in the form of PRs. For small things, we look at them on a case-by-case basis, trying to understand the use case and whether it fits into the focus of rkt.

    For issues, we follow a similar path: evaluate, communicate, and iterate. Sometimes issues are obvious; other times things are more nuanced.

    To keep track of things, we make heavy use of labels on GitHub. We put special emphasis on the low-hanging-fruit label that new contributors can get started with.

    7. What tools and guidelines do you use to write tests?

    We use Go in most of our code base and try to follow the Go way of writing tests: unit tests are written with the standard testing package (https://golang.org/pkg/testing/).

    For rkt’s functional tests, the Go testing package alone was not sufficient, so we wrote some Go code that execs rkt and checks its stdout/stderr using https://github.com/coreos/gexpect. Gexpect can communicate interactively with a spawned process via the usual standard I/O streams.
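    As a rough sketch of that pattern (not an actual test from the rkt suite; the command line and the expected string are placeholders), a gexpect-based functional test looks something like this:

        package rkt_test

        import (
            "testing"

            "github.com/coreos/gexpect"
        )

        // TestRunPrintsHello spawns the rkt binary, waits for the expected
        // output on its stdout/stderr, and fails the test if it never appears.
        func TestRunPrintsHello(t *testing.T) {
            // Placeholder command line; the real tests assemble it from the
            // test configuration and usually need root privileges.
            cmd := "rkt run --insecure-options=image docker://busybox --exec /bin/echo -- hello"

            child, err := gexpect.Spawn(cmd)
            if err != nil {
                t.Fatalf("cannot spawn rkt: %v", err)
            }
            defer child.Close()

            // Block until the expected string shows up in the process output.
            if err := child.Expect("hello"); err != nil {
                t.Fatalf("expected output not found: %v", err)
            }
        }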

    8. What is the most useful Semaphore feature for your work?

    The ability to run the tests with full privileges (as opposed to other platforms that restrict the environment with technologies like LXC, AppArmor, or containers), and the ability to use KVM, which is often unavailable on CI platforms that run on AWS, are very useful for testing with the KVM rkt stage1.

    Semaphore’s SSH feature is very useful, too. Quite often, we had a test working on our development laptops, but not on Semaphore because of the slightly different environment, or a different Linux distro. The ability to SSH to the server to debug the test failure saved us from wasting a lot of time.

    And of course, there is the fact that we don’t have to maintain the CI infrastructure ourselves, with security updates and so on. For a while, we were using both Semaphore for basic tests and a custom Jenkins installation hosted by CoreOS to spawn VMs with various distros (Ubuntu, Fedora, Debian…) on AWS. That’s a lot of work to configure and maintain compared to Semaphore, where the work is done for us. We will most likely need something to test on various distributions at some point; as far as I can see, there are no services offering this without the maintenance burden.

    9. Apart from yours, is there an open source tool or project you’re particularly excited about?

    I’m really excited about eBPF and Kubernetes and the possibility of using those together. It’s a less explored area that I think is fertile for a lot of cool new technologies.

    10. The future is now, and it’s in a container. What’s next? Where do you see the industry in 1 year?

    That’s very difficult to say. I think, despite the huge amount of excitement and work happening around containers, we’re only just starting down that road. There is a lot more to do in the space.

    As mentioned before, I am especially curious to see how Linux features (BPF, cgroup v2, etc.) end up being used in higher-level use cases like Kubernetes. But surprises might also come from outside Linux, from microkernels or similar projects. All this technology remains exciting!

    Take a cue from rkt and get rid of the overhead of configuring and maintaining a CI server. Semaphore is free for open source and for up to 100 private builds per month – set up CI for your project in a few clicks.


    Written by: Aleksandra