Continuous Integration (CI) is a software engineering practice in which we test our code on every update. The practice creates a strong feedback loop that reveals errors as soon as they are introduced. Consequently, we can spend more of our time coding features rather than hunting bugs.

Once we’re confident about the code, we can move on to Continuous Delivery (CD). Continuous Delivery runs through all the steps that make up a release, so each successful update has a corresponding package, ready for deployment.

I believe that practice is the best school. That is why I’ve prepared a demo project that you can practice on. As you follow the steps, you’ll learn how CI/CD can help you code faster and better.

In part 1 of the tutorial, we’ll learn how to build and test a Docker image using Continuous Integration.

In part 2 we’ll introduce Kubernetes to the picture. We’ll extend CI/CD with Continuous Deployment to a Kubernetes cluster.

Part 1: Testing and preparing a Java app for Docker with CI/CD

Getting ready

Here’s a list of the things you’ll need to get started.

  • Your favorite Java development editor and SDK.
  • Docker Hub account and Docker.
  • GitHub account and Git.
  • Curl to test the application.

Let’s get everything ready. First, fork the repository with the demo and clone it to your machine. This is the project’s structure:

semaphore java spring boot project structure

The application is built in Java Spring Boot and it exposes some API endpoints. It uses an in-memory H2 database; that will do for testing and development, but once it goes to production we’ll need to switch to a different storage method. The project includes tests, benchmarks and everything needed to create the Docker image.

These are the commands that we can use to test and run the application:

$ mvn clean test
$ mvn clean test -Pintegration-testing
$ mvn spring-boot:run

The project ships with Apache JMeter to generate benchmarks:

$ mvn clean jmeter:jmeter
$ mvn jmeter:gui

A Dockerfile to create a Docker image is also included:

$ docker build -t semaphore-demo-java-spring .

CI/CD Workflow

In this section we’ll examine what’s inside the .semaphore directory. Here we’ll find the entire configuration for the CI/CD workflow.

We’ll use Semaphore as our Continuous Integration solution. Our CI/CD workflow will:

  1. Download Java dependencies.
  2. Build the application JAR.
  3. Run the tests and benchmark. And, if all goes well…
  4. Create a Docker image and push it to Docker Hub.

But first, open your browser at Semaphore and sign up with your GitHub account; that will link up both accounts. Then, download the sem CLI tool and log in from your machine:

$ sem connect ORGANIZATION.semaphoreci.com ACCESS_TOKEN

Your access token can be found by clicking the terminal icon on the top right side of your Semaphore account.

Finally, add the project to Semaphore with sem init:

$ cd semaphore-demo-java-spring
$ sem init

Continuous Integration

Semaphore will always look for the initial pipeline file at .semaphore/semaphore.yml. A pipeline bundles all the configuration, environment and commands that Semaphore needs to do its job. Let’s examine the CI pipeline.

Name and Agent

The pipeline starts with a name and an agent. The agent is the virtual machine type that powers the jobs. Semaphore offers several machine types; we’ll use the free-tier e1-standard-2 machine with an Ubuntu 18.04 OS image.

version: v1.0
name: Java Spring example CI pipeline on Semaphore
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804

Jobs and Blocks

Jobs define the commands that give life to the CI process. Jobs are grouped in blocks. Once all jobs in a block are done, the next block begins.

The first block downloads the dependencies and builds the application JAR without running any tests. The block uses checkout to clone the repository and cache to store and retrieve the Java dependencies:

blocks:
  - name: "Build"
    task:
      env_vars:
        - name: MAVEN_OPTS
          value: "-Dmaven.repo.local=.m2"

      jobs:
      - name: Build
        commands:
          - checkout
          - cache restore
          - mvn -q package jmeter:configure -Dmaven.test.skip=true
          - cache store

The second block has two jobs, one for the unit tests and one for the integration tests. Commands in the prologue run before each job, so it’s a good place for common setup commands:

- name: "Test"
    task:
      env_vars:
        - name: MAVEN_OPTS
          value: "-Dmaven.repo.local=.m2"

      prologue:
        commands:
          - checkout
          - cache restore
          - mvn -q test-compile -Dmaven.test.skip=true
      jobs:
      - name: Unit tests
        commands:
          - mvn test
      - name: Integration tests
        commands:
          - mvn test -Pintegration-testing

The third block runs the benchmarks:

 - name: "Performance tests"
    task:
      env_vars:
        - name: MAVEN_OPTS
          value: "-Dmaven.repo.local=.m2"
      prologue:
        commands:
          - checkout
          - cache restore
      jobs:
      - name: Benchmark
        commands:
          - java -version
          - java -jar target/spring-pipeline-demo.jar > /dev/null &
          - sleep 20
          - mvn -q jmeter:jmeter
          - mvn jmeter:results

Promotions

At this point, the CI pipeline is complete. If the code passed all the tests, we can jump to the next stage: Continuous Delivery. We link up pipelines using promotions. The end of the CI pipeline includes a promotion that starts the Docker build if there were no errors:

promotions:
  - name: Dockerize
    pipeline_file: docker-build.yml
    auto_promote_on:
      - result: passed

Continuous Delivery

In this section, we’ll review how Semaphore builds the Docker image. Before you can run this pipeline though, you must tell Semaphore how to connect with Docker Hub.

To securely store passwords, Semaphore provides the secrets feature. Create a secret with your Docker Hub username and password. Semaphore will need them to push images into your repository:

$ sem create secret dockerhub \
   -e DOCKER_USERNAME='YOUR_USERNAME' \
   -e DOCKER_PASSWORD='YOUR_PASSWORD'

To view your secret details:

$ sem get secret dockerhub

Now that we have everything in place for the Continuous Delivery pipeline, let’s examine how it works. Open the .semaphore/docker-build.yml file. The pipeline is made of one block with a single job:

blocks:
  - name: "Build"
    task:
      env_vars:
        - name: MAVEN_OPTS
          value: "-Dmaven.repo.local=.m2"
        - name: ENVIRONMENT
          value: "dev"

      secrets:
      - name: dockerhub

      prologue:
        commands:
          - checkout
          - cache restore

      jobs:
      - name: Build and deploy docker container
        commands:
          - mvn -q package -Dmaven.test.skip=true
          - echo "$DOCKER_PASSWORD" | docker login  --username "$DOCKER_USERNAME" --password-stdin
          - docker pull "$DOCKER_USERNAME"/semaphore-demo-java-spring:latest || true
          - docker build --cache-from "$DOCKER_USERNAME"/semaphore-demo-java-spring:latest --build-arg ENVIRONMENT="${ENVIRONMENT}" -t "$DOCKER_USERNAME"/semaphore-demo-java-spring:latest .
          - docker push "$DOCKER_USERNAME"/semaphore-demo-java-spring:latest

Here, the prologue pulls the dependencies from the cache. Then the build job packages the JAR into a Docker image and pushes it to Docker Hub. The docker build command runs faster when there is an image in the cache; that is why the pipeline attempts to pull the latest image from the repository first.

Testing the Docker image

To start the workflow, make any modification to the code and push it to GitHub:

$ touch some_file
$ git add some_file
$ git commit -m "test workflow"
$ git push origin master

Go to your Semaphore account to see the pipeline working.

working pipeline for semaphore java spring demo
The workflow

After a few seconds the pipeline should be complete:

java spring example CI pipeline on Semaphore
The workflow in Semaphore


By now, you should have a ready Docker image in your repository. Let’s give it a go. Pull the newly created image to your machine:

$ docker pull YOUR_DOCKER_USER/semaphore-demo-java-spring:latest

And start it on your machine:

$ docker run -it -p 8080:8080 YOUR_DOCKER_USERNAME/semaphore-demo-java-spring

You can create a user with a POST request:

$ curl -w "\n" -X POST \
    -d '{ "email": "wally@example.com", "password": "sekret" }' \
    -H "Content-type: application/json" localhost:8080/users


{"username":"wally@example.com"}

With the user created, you can authenticate and see the secure webpage:

$ curl -w "\n" --user wally@example.com:sekret localhost:8080/admin/home

<!DOCTYPE HTML>
<html>
<div class="container">
 <header>
   <h1>
     Welcome <span>wally@example.com</span>!
   </h1>
 </header>
</div>

You can also open localhost:8080/admin/home in your browser and log in with the same credentials.

Part 2: Extending CI/CD with Continuous Deployment to a Kubernetes cluster

Adding a production profile to the application

You may recall from the first part of the tutorial that our application has a glaring flaw: the lack of data persistence, meaning our precious data is lost across reboots. Fortunately, this is easily fixed by adding a new profile with a real database. For the purposes of this tutorial, I’ll choose MySQL. You can follow along with MySQL in this section or choose any other backend from the Hibernate providers page.

First, edit the Maven manifest file (pom.xml) to add a production profile inside the <profiles> … </profiles> tags:

<profile>
   <id>production</id>
   <properties>
       <maven.test.skip>true</maven.test.skip>
   </properties>
</profile>

Then, between the <dependencies> … </dependencies> tags, add the MySQL driver as a dependency:

<dependency>
   <groupId>mysql</groupId>
   <artifactId>mysql-connector-java</artifactId>
   <scope>runtime</scope>
</dependency>

Finally, create a production-only properties file at src/main/resources/application-production.properties:

spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL55Dialect
spring.datasource.url=jdbc:mysql://${DB_HOST:localhost}:${DB_PORT:3306}/${DB_NAME}
spring.datasource.username=${DB_USER}
spring.datasource.password=${DB_PASSWORD}

We must avoid putting secret information such as passwords in GitHub. We’ll use environment variables and decide later how we’ll pass them along.
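A note on the placeholder syntax: Spring’s ${DB_HOST:localhost} falls back to the default after the colon when the environment variable is unset. If you want to see the same behavior from the shell, Bash’s ${VAR:-default} expansion works analogously (a quick illustrative snippet, not part of the project):

```shell
# Bash analogue of Spring's ${DB_HOST:localhost} default:
# ${VAR:-default} expands to $VAR if it is set, otherwise to "default".
unset DB_HOST
echo "DB_HOST -> ${DB_HOST:-localhost}"     # prints: DB_HOST -> localhost

export DB_HOST=10.1.2.3
echo "DB_HOST -> ${DB_HOST:-localhost}"     # prints: DB_HOST -> 10.1.2.3
```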

Now our application is ready for prime time.

Preparing your Cloud

In this section, we’ll create the database and Kubernetes clusters. Log in to your favorite cloud provider and provision a MySQL database and a Kubernetes cluster.

Creating the database

Create a MySQL database with a relatively recent version (e.g., 5.7 or 8.0). You can install your own server or use a managed cloud database. For example, AWS has RDS and Aurora, and Google Cloud has Cloud SQL.

Once you have created the database service:

  1. Create a database called demodb.
  2. Create a user called demouser with, at least, SELECT, INSERT, and UPDATE permissions.
  3. Take note of the database IP address and port.

Once that is set up, create the user tables:

CREATE TABLE `hibernate_sequence` (
 `next_val` bigint(20) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `users` (
 `id` bigint(20) NOT NULL,
 `created_date` datetime DEFAULT NULL,
 `email` varchar(255) DEFAULT NULL,
 `modified_date` datetime DEFAULT NULL,
 `password` varchar(255) DEFAULT NULL,
 PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Creating the Kubernetes cluster

The most accessible way to get started with Kubernetes is through a managed cluster from a cloud provider (such as Elastic Kubernetes Service on AWS, Google Kubernetes Engine on Google Cloud, etc.). I’ll try to keep this tutorial vendor-agnostic so you, dear reader, have the freedom to choose whichever alternative best suits your needs.

As for the cluster node sizes, this is a microservice, so requirements are minimal. The most modest machine will suffice, and you can adjust the number of nodes to your budget. If you want rolling updates, that is, upgrades without downtime, you’ll need at least two nodes.

Working with Kubernetes

On paper, Kubernetes deployments are simple and tidy: you specify the desired final state and let the cluster manage itself. And they can be, once we understand how Kubernetes thinks about:

  • Pods: a pod is a team of containers. Containers in a pod are guaranteed to run on the same machine.
  • Deployments: a deployment monitors pods and manages their allocation. We can use deployments to scale up or down the number of pods and perform rolling updates.
  • Services: services are entry points to our application. A service exposes a fixed public IP address to our end users; it can also do port mapping and load balancing.
  • Labels: labels are short key-value pairs we can add to any resource in the cluster. They are useful to organize and cross-reference objects in a deployment. We’ll use labels to connect the service with the pods.

Did you notice that I didn’t list containers as an item? While it is possible to start a single container in Kubernetes, it’s best to think of containers as tires on a car: they’re only useful as parts of the whole.

Let’s start by defining the service. Create a manifest file called deployment.yml with the following contents:

apiVersion: v1
kind: Service
metadata:
  name: semaphore-demo-java-spring-lb
spec:
  selector:
    app: semaphore-demo-java-spring-service
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080

Under the spec tree, we find the service definition: a network load balancer that forwards HTTP traffic to port 8080. Add the following contents to the same file, separated by three hyphens (---):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: semaphore-demo-java-spring-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: semaphore-demo-java-spring-service
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 0
  template:
    metadata:
      labels:
        app: semaphore-demo-java-spring-service
    spec:
      containers:
        - name: semaphore-demo-java-spring-service
          image: ${DOCKER_USERNAME}/semaphore-demo-java-spring:$SEMAPHORE_WORKFLOW_ID
          imagePullPolicy: Always
          env:
            - name: ENVIRONMENT
              value: "production"
            - name: DB_HOST
              value: "${DB_HOST}"
            - name: DB_PORT
              value: "${DB_PORT}"
            - name: DB_NAME
              value: "${DB_NAME}"
            - name: DB_USER
              value: "${DB_USER}"
            - name: DB_PASSWORD
              value: "${DB_PASSWORD}"
          readinessProbe:
            initialDelaySeconds: 60
            httpGet:
              path: /login
              port: 8080

The template.spec branch defines the containers that make up a pod. There’s only one container in our application, referenced by its image. Here we also pass along the environment variables.

The total number of pods is controlled with replicas. You should set it to the number of nodes in your cluster.

The update policy is defined in strategy. A rolling update refreshes the pods in turns, so there is always at least one pod working. The test used to check if the pod is ready is defined with readinessProbe.

Selector, labels and matchLabels work together to connect the service and deployment. Kubernetes looks for matching labels to combine resources.

You may have noticed that we are using special tags in the Docker image. In part one of the tutorial, we tagged all our Docker images as latest. The problem with latest is that we lose the capacity to version images; old images get overwritten on each build. If we have difficulties with a release, there is no previous version to roll back to. So, instead of latest, it’s best to use a unique identifier such as $SEMAPHORE_WORKFLOW_ID.
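To make the idea concrete, here is a sketch of how the unique tag is assembled. The workflow ID value below is made up for illustration; in a real job, Semaphore exports $SEMAPHORE_WORKFLOW_ID automatically:

```shell
# Hypothetical workflow ID; Semaphore sets the real value in CI jobs.
SEMAPHORE_WORKFLOW_ID="d1c9c076-0e1c-4dd4-b9e4-5b3f2a9d8c7e"
IMAGE="YOUR_DOCKER_USERNAME/semaphore-demo-java-spring:${SEMAPHORE_WORKFLOW_ID}"
echo "$IMAGE"
# prints: YOUR_DOCKER_USERNAME/semaphore-demo-java-spring:d1c9c076-0e1c-4dd4-b9e4-5b3f2a9d8c7e
```

Because each workflow run gets a fresh ID, every push produces a distinct image in the registry, and rolling back is as simple as redeploying an earlier tag.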

Preparing for Continuous Deployment

In this section, we’ll configure Semaphore CI/CD for Kubernetes deployments.

Creating more secrets

In part one of the tutorial, you created a secret with your Docker Hub credentials. Here, you’ll need to repeat the procedure with two more secrets.

Database user: a secret that contains your database username, password, and other connection details.

$ sem create secret production-db-auth \
   -e DB_HOST=YOUR_DATABASE_IP \
   -e DB_PORT=YOUR_DATABASE_PORT \
   -e DB_NAME=YOUR_DATABASE_NAME \
   -e DB_USER=YOUR_DATABASE_USERNAME \
   -e DB_PASSWORD=YOUR_DATABASE_PASSWORD

Kubernetes cluster: a secret with the Kubernetes connection parameters. The specific details will depend on how and where the cluster is running. For example, if a kubeconfig file was provided, you can upload it to Semaphore with the following command:

$ sem create secret production-k8s-auth \
     -f kubeconfig.yml:/home/semaphore/.kube/config

Creating the deployment pipeline

We’re almost done. The only thing left is to create a Deployment Pipeline to:

  • Generate manifest: populate the manifest with the real environment variables.
  • Make a deployment: send the desired final state to the Kubernetes cluster.

Depending on where and how the cluster is running, you may need to adapt the following code. If you only need a kubeconfig file to connect to your cluster, great, this should be enough. Some cloud providers, however, need additional helper programs.

For instance, AWS requires the aws-iam-authenticator when connecting with the cluster. You may need to install programs—you have full sudo privileges in Semaphore—or add more secrets. For more information, consult your cloud provider documentation.

Create the “Deploy to Kubernetes” pipeline at .semaphore/deploy-k8s.yml:

version: v1.0
name: Deploy to Kubernetes
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804

blocks:
  - name: Deploy to Kubernetes
    task:
      secrets:
        - name: production-k8s-auth
        - name: production-db-auth
        - name: dockerhub

      prologue:
        commands:
          # - <<<put any installation/authentication commands here>>>
          # eg: gcloud, aws, doctl, az, etc

          - checkout

      jobs:
      - name: Deploy
        commands:
          - envsubst < deployment.yml > deploy.yml
          - kubectl apply -f deploy.yml

Since we are abandoning the latest tag, we need to make two updates to the “Docker Build” pipeline. Modification number one is using the same workflow id for the build command.

Open the pipeline located at .semaphore/docker-build.yml and replace the last two occurrences of “latest” with “$SEMAPHORE_WORKFLOW_ID”:

  - docker build --cache-from "$DOCKER_USERNAME"/semaphore-demo-java-spring:latest --build-arg ENVIRONMENT="${ENVIRONMENT}" -t "$DOCKER_USERNAME"/semaphore-demo-java-spring:$SEMAPHORE_WORKFLOW_ID .

  - docker push "$DOCKER_USERNAME"/semaphore-demo-java-spring:$SEMAPHORE_WORKFLOW_ID

Modification number two is connecting the “Docker Build” and “Deploy to Kubernetes” pipelines with a promotion. Add the following snippet at the end of .semaphore/docker-build.yml:

promotions:
  - name: Deploy to Kubernetes
    pipeline_file: deploy-k8s.yml
    auto_promote_on:
      - result: passed
        branch:
          - master

Your first deployment

At this point, you’re ready to do your first deployment. Push the updates to GitHub to start the process:

$ git add .semaphore deployment.yml
$ git commit -m "add Kubernetes deployment pipeline"
$ git push origin master

Allow a few minutes for the pipelines to do their work:

kubernetes
Deploy workflow

Once the workflow is complete, Kubernetes takes over. You can monitor the process from your cloud console or using kubectl:

$ kubectl get deployments
$ kubectl get pods

To retrieve the external service IP address, check your cloud dashboard page or use kubectl:

$ kubectl get services

That’s it. The service is running, and you now have a complete CI/CD process to deploy your application.

Wrapping up

You’ve now learned how Semaphore and Docker can work together to automate Kubernetes deployments. Feel free to fork the demo project and adapt it to your needs. Kubernetes developers are in high demand, and you just did your first Kubernetes deployment. Way to go!

Some ideas to have fun with:

  1. Re-run the tests from part one but with the cluster IP instead of localhost.
  2. Make a change and push it to GitHub. You’ll see that Kubernetes does a rolling update, restarting one pod at a time until all of them are running the newer version.
  3. Try implementing a more complicated setup. Create additional services and run them as separate containers in the same pod.

Then, learn more about CI/CD for Docker and Kubernetes with our other step-by-step tutorials.

Part 1 and Part 2 of this tutorial were originally published on JAXenter.