20 Mar 2020 · Semaphore News

    Continuous Integration with Docker Compose


    This is a guest post by Nigel Brown, an independent Docker specialist who writes, teaches, and consults on all things Docker-related. Based in the UK, he travels regularly, and can be found at windsock.io, and on GitHub.

    If you take heed of any of the many reports relating to cloud-native computing, you’d be forgiven for thinking that every organization, large or small, is well on the way to a microservices-oriented application nirvana. The truth, of course, is that while a good number of organizations have started the journey, many still maintain traditional monolithic applications and have some way to go. Is it possible to take advantage of the new set of cloud-native technologies, such as Docker, when you have applications of mixed architectural styles?

    I’ll not dwell on Docker, because unless you’ve been marooned on a desert island for the last few years, you won’t have been able to escape the relentless coverage of its perceived benefits. Instead, I want to discuss one of Docker’s ancillary tools: Docker Compose.

    Whether you’re developing a microservices-oriented software application, or looking for a better way to distribute a traditional, monolithic application, Docker Compose is a key tool in the developer’s arsenal. Docker Compose was formerly known as Fig; Docker Inc. acquired the software when its creators joined Docker in July 2014, barely a month after the release of the first supported version of Docker. Since then, Fig has evolved into Docker Compose, has been considerably enhanced, and is widely used by Docker adopters in their development practices.

    Whilst the ‘development’ use case is well understood, its application in a continuous integration (CI) workflow is perhaps less obvious. Your application may be composed of many parts, or it may simply rely on additional software components (e.g. an RDBMS); either way, Docker Compose has a big part to play in the orchestration of applications, and can provide significant benefits when used in conjunction with CI tooling.

    Before we see how these CI workflow benefits are realised, let’s take a step back, and describe Docker Compose in a little more detail.

    What is Docker Compose?

    Docker Compose is an orchestration tool for container-based applications that are composed of, or reliant on, multiple, loosely coupled services. Unlike cluster-based orchestration tools, such as Kubernetes or Swarm, Docker Compose is designed to orchestrate containers on a single Docker host. This provides a software developer with a significant benefit: the ability to better mimic a complete application environment when developing a service or application. Additionally, because of the lightweight nature of containers, and the orchestration capabilities inherent in Docker Compose, it enables developers to be more productive as they iterate through a code-build-test cycle.

    A developer defines the components or dependencies of an application, and their configuration, using YAML syntax, in a file usually called docker-compose.yml. The service components, as they’re called, can then be manipulated individually or collectively through Docker Compose’s rich command line interface (CLI). Here’s an example describing a simple application that makes use of Redis:

    version: "3.3"
    
    services:
      web:
        build:
          context: .
          dockerfile: ./dev/Dockerfile.dev
        command: python app/app.py
        ports:
          - "5000:5000"
        volumes:
          - .:/code
    
      redis:
        image: redis:4.0.1
    

    The application comprises two services: the redis service uses a stock Docker image, whilst the web service defines the ‘ingredients’ for creating an image based on some developed code. The characteristics of each service, including its version, are defined in the image or build details, which means you never have to worry about installing the dependencies locally, or working against an incorrect version. In our example, we get Redis version 4.0.1, not version 4.0.0.

    The Docker Compose CLI allows you to: build the web service, bring up each service individually or both services together, stop the services, re-build the web service after making some source code changes, scale the web service, and so on.
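
    By way of illustration, here’s roughly what that looks like on the command line for the example above (these are standard Docker Compose sub-commands, run from the directory containing docker-compose.yml):

    # build the image for the web service
    $ docker compose build web

    # start both services in the background
    $ docker compose up -d

    # re-build and restart the web service after a source code change
    $ docker compose up -d --build web

    # stop the services
    $ docker compose stop

    # stop and remove the containers, along with the default network
    $ docker compose down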

    Having the ability to define our intended environment, create it without installing any software, tear it down, and then faithfully re-create it at will, is a very powerful capability provided by Docker Compose. Without Docker Compose, this would be an arduous task, fraught with the risk associated with ‘dependency hell’.

    How does Docker Compose help in continuous integration?

    This seems to make perfect sense in a development setting, but how does it extend to an automated CI workflow? Continuous integration generally starts with a developer’s commit to a source code repository, which then triggers a sequence of steps that will (hopefully) end with a successfully built and tested artefact. In the case of a Dockerized service, this tends to be a Docker image. If a developer requires a comprehensive environment to provide a meaningful context for the application or service she is developing, then the same applies to a CI tool building and testing new commits to a code repository. It naturally follows that if Docker Compose provides a wealth of benefit to a developer, it can also provide that same benefit in a CI scenario.

    If a developer manually orchestrates a multi-service application through the Docker Compose CLI, how can we achieve the same objective using automation in a CI environment? Generally, CI tools allow for the execution of commands as part of a sequence of steps during a ‘build’. The Semaphore CI platform provides this capability, but goes a step further by providing native Docker Compose CLI support for builds. In fact, Semaphore scans the cloned repository and intelligently proposes CI build steps based on its content, including the execution of pertinent Docker Compose commands, such as docker compose build. Hence, the alleviation from dependency hell afforded by Docker Compose in a development setting can also be made available in a CI scenario.
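
    To give a flavour of how this might be wired up, here’s a minimal sketch of a Semaphore pipeline definition (.semaphore/semaphore.yml) that runs docker compose build on every commit; the block and job names, and the machine type and OS image, are illustrative choices rather than anything prescribed:

    version: v1.0
    name: Docker Compose CI pipeline
    agent:
      machine:
        type: e1-standard-2
        os_image: ubuntu1804
    blocks:
      - name: Build
        task:
          jobs:
            - name: docker compose build
              commands:
                # fetch the repository contents into the CI job
                - checkout
                # build the images defined in docker-compose.yml
                - docker compose build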

    A CI workflow, however, is not just concerned with successfully building an artefact. Generally, a CI workflow will also include steps to perform various automated tests, in order to provide confidence in new code that has been integrated into an application. Docker Compose can add considerable value here, too. The configuration of a service may need to change subtly in a testing scenario. For example, in a development environment, with interpreted code, we might want to mount the source code from a developer’s host, into the service container, which will enable the developer to get instant feedback on code changes. In a testing scenario, however, we will probably want to copy the code into the image used to derive the service container. Other examples might include changing the port or IP address of the location of a service which is to be consumed by the service under test, depending on the context; development or test.

    Multiple YAML configuration files can be passed as arguments to a Docker Compose CLI command, which allows the defined services to be altered or augmented according to file precedence. For example, consider the following simple service defined in a YAML file called docker-compose.yml:

    version: "3.3"

    services:
      web:
        image: web:1.0
        ports:
          - "5000:5000"
    

    This service can be amended by a subsequent definition in a file called, for example, docker-compose.test.yml:

    version: "3.3"

    services:
      web:
        build:
          context: .
          dockerfile: Dockerfile.test
        image: web:${tag:-latest}
    

    The service definition in the file docker-compose.yml is altered according to the service definition in the file docker-compose.test.yml. Non-conflicting service attributes, and other service definitions, are left unchanged. The command to establish this service would be something like the following, with the definition in docker-compose.test.yml taking precedence over that in docker-compose.yml, because it is specified later on the command line:

    $ docker compose -f docker-compose.yml -f docker-compose.test.yml up -d web
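
    Incidentally, if you want to see exactly what the merged service definition looks like before bringing anything up, the Docker Compose CLI can render it for you:

    # print the effective configuration resulting from merging the two files
    $ docker compose -f docker-compose.yml -f docker-compose.test.yml config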
    

    Assuming we have tests baked into the Docker image defined by Dockerfile.test, those tests are located in the derived container’s filesystem, and can be invoked inside the running container, by service name, using another Docker Compose CLI sub-command:

    $ docker compose -f docker-compose.yml -f docker-compose.test.yml exec web ./test_runner_script.sh
    

    The real power of this capability to amend service definitions, however, comes from the ability to add a completely new set of services to the environment, just for testing purposes. These ‘testing’ services can be used during the CI workflow to invoke containers derived from specially prepared Docker images, for performing tests against the ‘application’ services. Defining them in a separate YAML file has the advantage of keeping the testing services apart from those used for development, and even from QA and/or production configurations, if required.
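
    As a sketch of the idea, a dedicated test-runner service could be added to docker-compose.test.yml alongside the amended web service; the service name, Dockerfile and test script below are hypothetical, and assume a test image that reaches the other services over the Compose network:

    version: "3.3"

    services:
      test:
        build:
          context: .
          dockerfile: Dockerfile.e2e    # hypothetical Dockerfile for a test-runner image
        command: ./run_e2e_tests.sh     # hypothetical test script baked into that image
        depends_on:
          - web
          - redis

    With the files combined, a single command starts the application services and runs the tests against them:

    $ docker compose -f docker-compose.yml -f docker-compose.test.yml run --rm test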

    So, with a minimal amount of work, it’s possible to take an existing definition of an application environment that’s used in development, and transpose it into a CI environment for automated build and test. The flexibility afforded by Docker Compose and its configuration capabilities allows a complete application environment to be built and thoroughly tested, with the option of having the resultant artefact pushed, as a Docker image, to a Docker registry.
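
    That final push might be nothing more than tagging the tested image for a registry and pushing it; the registry address below is a placeholder, and assumes you have already authenticated with docker login:

    # tag the tested image for a (placeholder) private registry
    $ docker tag web:${tag:-latest} registry.example.com/web:${tag:-latest}

    # push it so downstream delivery pipelines can pull it
    $ docker push registry.example.com/web:${tag:-latest}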

    Summary

    Whilst not all software applications are suitable for deployment in a container, where Docker is employed as a delivery vehicle, Docker Compose can bring substantial benefits. Those benefits are not constrained to a development environment alone, and can be extrapolated and extended into a CI workflow. Docker Compose enables an application to be built and then tested against an authentic, representative environment, which can be created, and faithfully and accurately re-created, at will. It’s just as well, as we’re encouraged to merge code early and often; without this level of automation, life would be very difficult indeed!

    If you’d like to continuously deliver your applications made with Docker, sign up for Semaphore here to try Semaphore’s Docker platform with full layer caching for tagged Docker images.
