28 Feb 2024 · Software Engineering

    Simplifying Kubernetes Development: Your Go-To Tools Guide

    13 min read

    With the increasing adoption of Kubernetes for application development, the need for efficient local development tools has become paramount. In the past few years, tools for working with Kubernetes as a developer have improved. These tools help developers streamline workflows, accelerate iteration cycles, and create authentic development environments. This article will comprehensively analyze and compare six popular modern Kubernetes local development tools. By the end, you’ll have the information you need to make an informed choice and supercharge your Kubernetes development experience.

    Skaffold

    Skaffold is a powerful tool that automates the development workflow for Kubernetes applications. Skaffold provides a comprehensive solution catering to local development requirements and CI/CD workflows. It enables developers to iterate quickly by automating image builds and deployments and by watching the source code for changes. Skaffold supports multiple build tools and provides seamless integration with local Kubernetes clusters. Its declarative configuration and intuitive command-line interface make it popular among developers.

    The Skaffold configuration file, typically named skaffold.yaml, is a YAML file that defines how your application should be built, tested, and deployed. It acts as the central configuration hub for Skaffold, allowing you to specify various settings and options tailored to your specific project’s needs.

    apiVersion: skaffold/v2beta15
    kind: Config
    
    build:
      artifacts:
        - image: my-app
          context: ./app
          docker:
            dockerfile: Dockerfile
    
    deploy:
      kubectl:
        manifests:
          - k8s/deployment.yaml
          - k8s/service.yaml
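With this configuration in place, the inner loop is driven from the command line. A typical sequence looks like this (assuming Skaffold is installed and a Kubernetes context is configured):

```shell
# Build, deploy, and watch the source tree; Skaffold rebuilds
# and redeploys automatically whenever a file changes
skaffold dev

# One-off build and deploy, without the watch loop
skaffold run

# Remove everything Skaffold deployed to the cluster
skaffold delete
```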

    Benefits

    Skaffold features a modular architecture that allows you to select tools aligned with your specific CI/CD needs. It resembles a universal Swiss army knife and finds utility across various scenarios. With image management capabilities, it can automatically create image tags each time an image is built. Furthermore, Google’s Cloud Code plugin uses Skaffold to provide seamless local and remote Kubernetes development workflow with several popular IDEs. Notably, Skaffold delivers the advantage of maintaining distinct configurations for each environment through its profile feature.
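The profiles feature mentioned above keeps per-environment settings in the same skaffold.yaml. A minimal sketch (the profile names, Dockerfile.dev, and manifest paths are illustrative):

```yaml
profiles:
  - name: dev
    build:
      artifacts:
        - image: my-app
          docker:
            dockerfile: Dockerfile.dev
  - name: prod
    deploy:
      kubectl:
        manifests:
          - k8s/prod/*.yaml
```

A profile is activated with skaffold dev -p dev (or skaffold run -p prod), overriding the matching sections of the base configuration.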

    Limitations

    In my experience, using Skaffold, you may encounter difficulties running all instances locally when handling numerous resource-intensive microservices. Consequently, developers might resort to mocking certain services, resulting in deviations from the actual production behavior.

    Tilt

    Tilt is an open-source tool that enhances the Kubernetes developer experience. Tilt uses a Tiltfile for configuration, written in a Python dialect called Starlark. The Tiltfile defines how your application should be built, deployed, and managed during development; the full Tiltfile API is covered in the official Tilt documentation. Here’s an example of a Tiltfile:

    # Sample Tiltfile
    
    # Define the Docker build for the application
    docker_build(
        'my-app',      # image reference to build
        './app',       # build context directory
        dockerfile='Dockerfile'
    )
    
    # Define the Kubernetes deployment manifests
    k8s_yaml('k8s/deployment.yaml')
    k8s_yaml('k8s/service.yaml')

    When you run Tilt with this Tiltfile, it will build the Docker image based on the specified Dockerfile and deploy the application to the Kubernetes cluster using the provided Kubernetes manifests. Tilt will also watch for changes in the source code and automatically trigger rebuilds and redeployments, ensuring a smooth and efficient development experience.
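Beyond building and deploying, the Tiltfile API can also configure port forwards and startup ordering for a workload. A small sketch (the my-app and database resource names are illustrative):

```python
# Forward local port 8080 to the my-app workload, and only
# bring it up after the database resource is ready
k8s_resource(
    'my-app',
    port_forwards=8080,
    resource_deps=['database']
)
```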

    Benefits

    Unlike other Kubernetes development tools, Tilt goes beyond being a command-line tool: it also ships a web UI that lets you easily monitor each service’s health status, build progress, and runtime logs. In my experience, the Tilt UI dashboard provides an excellent overview of the current status, which is beneficial when dealing with multiple services. While Tilt excels at delivering a smooth developer experience, it may require additional setup for more complex deployments.

    Limitations

    Adopting Tilt may require additional learning, especially for developers unfamiliar with the Starlark Python dialect. Understanding and writing the Tiltfile can be challenging for those with no prior experience in the language, and because Tilt uses Starlark rather than a widely adopted configuration format like YAML, its configuration files will feel less familiar than those of most other tools.

    Telepresence

    Telepresence, developed by Ambassador Labs and recognized as a Cloud Native Computing Foundation sandbox project, aims to enhance the Kubernetes developer experience. With Telepresence, you can run a single service locally while seamlessly connecting it to a remote Kubernetes cluster. Unlike Skaffold, which redeploys to a (typically local) cluster on every change, this eliminates the need to continuously publish and deploy new artifacts to the cluster. Telepresence runs a placeholder pod for your application in the remote cluster and routes incoming traffic to the container on your local workstation, so any changes developers make to the application code are reflected in the remote cluster instantly, without deploying a new container.

    To start debugging your application with Telepresence, you first need to connect your local development environment to the remote cluster using the telepresence connect command.

    Then, you can start intercepting traffic using the telepresence intercept command. For example, to intercept traffic for a service named order-service and route it to a locally running instance on port 8080, run telepresence intercept order-service --port 8080.

    Once the intercept is active, all traffic intended for the remote service will be routed to your locally running instance. You can use tools like curl to send requests to the remote service. Telepresence will route these requests to your local service.
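Putting the steps above together, a typical session looks like this (assuming kubectl access to the cluster and a deployed order-service):

```shell
# Connect your workstation to the remote cluster's network
telepresence connect

# See which services can be intercepted
telepresence list

# Route traffic for order-service to your local process on port 8080
telepresence intercept order-service --port 8080

# Stop the intercept and disconnect when you are done
telepresence leave order-service
telepresence quit
```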

    Benefits

    Telepresence proves its worth by facilitating remote development capabilities from your laptop, utilizing minimal resources. It negates the need for setting up and running a separate local Kubernetes cluster, such as minikube or Docker Desktop. Telepresence is particularly useful when working with distributed systems and microservices architectures. Telepresence simplifies the process and ensures your development environment closely mirrors production behavior.

    Limitations

    Telepresence relies on a remote Kubernetes cluster to proxy requests to and from the local development environment. If there are issues with the remote cluster’s availability or connectivity, it can disrupt the development workflow. In my experience, Telepresence may also need extra setup in environments with strict network or firewall restrictions.

    Okteto

    Okteto effectively removes the challenges associated with local development setups, the vast array of variations that can arise within a single engineering organization, and the problem-solving that often accompanies such complexity. Its key strength lies in shifting the inner development loop to the cluster rather than automating it on the local workstation. By defining the development environment in an okteto.yaml manifest file and using commands like okteto init and okteto up, developers can quickly establish their development environment on the cluster.

    # Sample okteto.yaml
    
    # Name and target namespace of the development environment
    name: my-app
    namespace: my-namespace
    
    # Image to use for the development container
    image: my-app:latest
    
    # Sync the local ./app directory with /app in the remote container
    sync:
      - ./app:/app
    
    # Forward local port 8080 to port 80 on the remote container
    forward:
      - 8080:80

    The okteto.yaml file is used by Okteto to define the development environment and to specify how local files and ports are synchronized and forwarded to the remote Kubernetes cluster.

    When you run okteto up with this okteto.yaml file, Okteto will create the specified development environment in the defined namespace and deploy the my-app Docker image to the remote cluster. It will also sync the local ./app directory with the /app directory on the cluster, ensuring any changes made locally are immediately reflected in the remote cluster. Additionally, the port forwarding specified in the file allows you to access the my-app service running in the cluster as if it were running locally on port 8080.

    The okteto.yaml file provides an easy way to configure your Okteto development environment and synchronize local development with the remote Kubernetes cluster. It offers a seamless development experience, allowing you to work with a remote cluster as a local development environment.
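In practice, the workflow boils down to a few commands (assuming the Okteto CLI is installed and your kubeconfig points at the cluster):

```shell
# Generate an okteto manifest for the current project
okteto init

# Spin up the development container and start syncing files
okteto up

# Tear the development environment down when finished
okteto down
```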

    Benefits

    Okteto is a good solution for effortlessly synchronizing files between local and remote Kubernetes clusters. Its single binary works across operating systems, and it provides a convenient remote terminal into the container development environment. Hot code reload enables quicker iterations, and bi-directional port forwarding allows smooth communication between local and remote services. Okteto works with local and remote Kubernetes clusters, Helm, and serverless functions, eliminating the need to build, push, or deploy during development, and it removes the need for language-specific runtime installations on the workstation.

    Limitations

    Okteto heavily relies on a remote Kubernetes cluster for development. If there are issues with the remote cluster’s availability or connectivity, it can disrupt the development workflow. Since Okteto moves the development loop to the cluster, it may consume additional resources, leading to increased costs and contention with other workloads on the cluster. In my experience, this might also affect the performance and responsiveness of the development environment.

    Docker Compose

    Although not designed explicitly for Kubernetes, Docker Compose is widely used for defining and running multi-container applications. It allows developers to define services, networks, and volumes in a declarative YAML file, making it easy to set up complex development environments. With the addition of Docker’s Kubernetes integration, Compose files can also be used to deploy applications to a Kubernetes cluster. To use Docker Compose, Docker must be installed on the workstation. However, it’s important to note that while Docker Compose may give developers a sense of running their application in a Kubernetes environment like minikube, it fundamentally differs from an actual Kubernetes cluster. As a result, an application that works smoothly on Docker Compose may not behave the same way when deployed to a Kubernetes production cluster.

    Here is an example of a Docker Compose file for a simple Java application:

    version: '3'
    
    services:
      app:
        build:
          context: .
          dockerfile: Dockerfile
        ports:
          - "8080:8080"
        volumes:
          - ./src/main/resources:/app/config

    - services defines the services that make up your application; in this case, a single service called app.
    - build specifies the build context and Dockerfile used to build the image for the app service.
    - context is the path to the directory containing the Dockerfile and application source code.
    - dockerfile is the filename of the Dockerfile to use.
    - ports maps port 8080 on the host to port 8080 in the container ("8080:8080"), allowing you to access the Java application running in the container at http://localhost:8080.
    - volumes creates a bind mount of the src/main/resources directory on the host to /app/config inside the container, so changes to the configuration files on the host are reflected in the container.

    To use this Docker Compose configuration, navigate to the directory containing the docker-compose.yml file and run the following command:

    docker-compose up
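A few other everyday lifecycle commands from the standard Compose CLI round out the workflow:

```shell
# Rebuild images and run the stack in the background
docker-compose up --build -d

# Tail the logs of the app service
docker-compose logs -f app

# Stop and remove the containers and networks
docker-compose down
```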

    Benefits

    With Docker Compose, you can outline your application’s services, configurations, and dependencies within a single file. From my experience, Docker Compose adheres to the KISS (Keep It Simple, Stupid) design principle: it simplifies overseeing and deploying complex applications that consist of numerous containers. It’s well suited to applications that run on a single host or machine, making it an excellent choice for development and testing environments. Docker Compose allows for fast iteration during development, since you can quickly rebuild and redeploy containers, and its learning curve is generally less steep than that of Kubernetes, making it accessible to developers who are new to container orchestration.

    Limitations

    While containers effectively address the “works on my machine” problem, Docker Compose introduces a new challenge: “works on my Docker Compose setup.” It might be tempting to use Docker Compose to streamline the inner development loop of cloud applications, but as previously explained, discrepancies between the local and production environments can make debugging challenging.

    Garden

    Garden is a comprehensive local development environment for Kubernetes that aims to provide a consistent and reproducible experience across different development stages. It is a platform for cloud-native application development, testing, and deployment. It offers an opinionated approach to containerized development by leveraging Docker and Kubernetes. Garden integrates well with popular IDEs and provides features like hot reloading and seamless service discovery. It outlines your application’s build and deployment process through a configuration file known as garden.yml.

    To integrate Garden into your project, start by scaffolding the project-level configuration (recent Garden releases provide a command for this):

    garden create project

    Afterward, you can describe your application by adding a module definition like the following to the garden.yml file (a minimal container module; the web name and port are illustrative):

    kind: Module
    type: container
    name: web
    services:
      - name: web
        ports:
          - name: http
            containerPort: 3000

    Upon configuring Garden within your project, launch it by navigating to your project directory and running the garden dev command. Garden will then build and deploy the services specified in the garden.yml file and watch for changes. To incorporate testing into your setup, add a tests section to your garden.yml file.

    tests:
      - name: my-tests
        args: [mvn, test]

    Subsequently, running the garden test command will execute the test suites. Finally, you can deploy to different Kubernetes clusters by adding a target environment configuration to your project-level garden.yml file.

    kind: Project
    name: my-project
    environments:
      - name: remote
    providers:
      - name: kubernetes
        environments: [remote]
        context: my-kubernetes-cluster

    Here we indicate that the target environment is the Kubernetes cluster identified by my-kubernetes-cluster. Then, execute the garden deploy command to initiate deployment. garden deploy automates deploying the application to the specified environment, handling tasks like image building, Kubernetes orchestration, and synchronization, and providing a seamless environment for development and testing.

    Benefits

    One of its key strengths lies in its ability to streamline the setup process for cloud-native development environments. By abstracting away the intricacies of creating Kubernetes configurations and other deployment-related tasks, Garden greatly simplifies the initial setup process, allowing developers to focus on coding and testing their applications rather than grappling with complex configurations. This ease of setup can significantly accelerate the onboarding process for both seasoned developers and newcomers to the Kubernetes ecosystem, enabling them to dive into productive work sooner.

    Limitations

    In my experience, Garden’s setup is more intricate than that of the other tools, and it takes some time to become acquainted with its concepts, resulting in a steeper learning curve. Furthermore, for smaller applications, Garden’s capabilities might be excessive. Notably, Garden is commercial open-source software, so some of its features require a paid plan.

    Conclusion

    Choosing the right local development tool for Kubernetes can significantly impact your productivity and the quality of your applications. Each tool discussed in this article has strengths and weaknesses, catering to different use cases and preferences. Skaffold and Tilt excel in automation and iterative development, Telepresence and Okteto provide seamless interaction with remote clusters, Docker Compose offers a familiar experience with multi-container applications, and Garden provides a comprehensive development environment. When deciding, consider your specific requirements, development workflows, and the complexity of your Kubernetes deployments. Experimenting with different tools and finding the one that aligns best with your needs will enhance your Kubernetes development experience and help you build robust applications efficiently.

    Written by:
    I am a software engineer who loves Java, Spring Boot, DevOps, and the Cloud.
    Reviewed by:
    I picked up most of my soft/hardware troubleshooting skills in the US Army. A decade of Java development drove me to operations, scaling infrastructure to cope with the thundering herd. Engineering coach and CTO of Teleclinic.