Continuous Deployment of Golden Images with Packer and Semaphore

Learn how to use Packer to build an AWS AMI (Amazon Machine Image), configure it to run a Docker image, and continuously deploy the application using Semaphore.

In this tutorial, we'll introduce you to Packer and its use cases. It covers using Packer to build an Amazon Machine Image (AMI) and deploying the application to AWS with a load balancer and autoscaling group.

Prerequisites

This tutorial involves a few mandatory tools and assumes a few things. You'll need the following to complete this tutorial:

  • A rough understanding of Docker and Docker containers,
  • Access to an AWS account,
  • A Semaphore CI account,
  • Familiarity with AWS concepts such as Amazon Machine Image (AMI), Elastic Load Balancer (ELB), and Auto Scaling Groups (ASG),
  • Understanding of CloudFormation at a high level, and
  • Experience with some configuration management tool.

To get started with Packer, you just need some experience with writing code and installing packages. This should be enough to complete the Packer half of the tutorial. The above points are more relevant in the second half which leverages AWS concepts.

Packer

Packer is a tool for creating machine and container images for multiple platforms from a single source configuration.

Packer is a go-to tool for engineers working with infrastructure and deployments. It's used for building various images from a configuration file, and it's even more powerful when paired with a configuration management tool, e.g. Ansible, Chef, Puppet, or SaltStack. Packer can build the following:

  • Images for AWS, Azure, Digital Ocean, and GCE,
  • OpenStack,
  • Docker,
  • VirtualBox,
  • VMWare, and
  • Parallels.

Packer supports provisioning through file uploads and shell scripts, and has built-in support for various Chef modes, Puppet, Ansible playbooks, and other tools. Packer also includes "post-processors", which can take actions on the artifact afterwards. You could use the VirtualBox builder to create an image, pass it through the Vagrant post-processor to make it work with Vagrant, then use the Vagrant Cloud post-processor to share it for public consumption. Another example would be to start with the Docker builder, then use post-processors to tag the image and push it to a Docker registry.
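As a sketch of that second example, a Docker builder chained with tag and push post-processors might look like the following. This is illustrative only; the repository name myorg/app is a placeholder for your own registry path.

```json
{
    "builders": [
        {
            "type": "docker",
            "image": "ubuntu:16.04",
            "commit": true
        }
    ],
    "post-processors": [
        [
            {
                "type": "docker-tag",
                "repository": "myorg/app",
                "tag": "latest"
            },
            {
                "type": "docker-push"
            }
        ]
    ]
}
```

Note the nested array under post-processors: it chains the two steps in sequence, so the tagged image from docker-tag is what docker-push uploads.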

Packer supports many use cases, but there are two key areas. The first one is building base images for your application infrastructure. You can use Packer to create an image that contains all the dependencies, monitoring software, and security patches required to run one or all your applications. Then, you can push the image out to your infrastructure and run a configuration management system on top for use case specific tweaking.

The second one is golden images, which are the focus of this article. A golden image is an immutable image tied to a specific software version; everything is baked in. Golden images don't change after they're built, which makes them trivially easy to scale out and deploy. The image should include scripts to start your software at boot; that way, there's nothing to do besides starting the image. Beyond that, golden images fall into the "simplest thing that could possibly work" category and may be reused across multiple applications without much worry.

Building Our First Image with Packer

We're going to build an AMI that runs a "Hello World" web application written in Node.js. We'll use Docker to run the Node.js application as a Docker container. Docker makes this tutorial more flexible, so it's easier to apply the steps mentioned here to your own application.

The source for the "Hello World" application in src/app.js is as follows:

    var express = require('express');
    var app = express();

    app.get('/', function(req, res) {
        res.send('Hello world!\n');
    });

    module.exports = app;

Here is script/start.js used for npm start:

    var app = require('../src/app');

    app.listen(process.env.PORT, '0.0.0.0', function() {
        console.log('Goliath Online!');
    });

The application is simple enough that we can use the node:6-onbuild Docker image. This image automatically installs the dependencies listed in package.json. Here's the complete source.
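For the Docker image build later in this tutorial, the repository needs a Dockerfile at its root. With node:6-onbuild it can be minimal; the sketch below is one way to write it, assuming package.json defines a start script pointing at script/start.js.

```dockerfile
# node:6-onbuild copies the project into the image, runs npm install for
# the dependencies listed in package.json, and sets "npm start" as the
# default command.
FROM node:6-onbuild

# Document the port script/start.js listens on (provided via the PORT
# environment variable at run time).
EXPOSE 8080
```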

Packer uses a JSON configuration file. The file lists which builders (e.g. AWS, VirtualBox, Docker), provisioners (e.g. file uploads, shell scripts), and post-processors to use. We'll use AWS with the official Ubuntu 16.04 image, and a simple shell script to kick off the build. This gives us room to grow as the steps become more complicated.

Now, on to the packer.json. We'll build this up in parts as we go. First, configure the builder to use AWS as follows:

    {
        "builders": [
            {
                "type": "amazon-ebs",
                "region": "ap-southeast-1",
                "source_ami": "ami-a1288ec2",
                "instance_type": "c4.large",
                "ssh_username": "ubuntu",
                "ami_name": "semaphore-packer-tutorial-{{timestamp}}",
                "associate_public_ip_address": true
            }
        ]
    }

The type key sets the builder. Packer supports multiple builders and can even run them in parallel. The remaining keys are AWS specific. Packer reads AWS credentials from environment variables by default, which is why they are not included here. Use the region nearest to you. The source_ami must be available in that region; the value comes from the official Ubuntu EC2 images. The ami_name must be unique. Packer provides rudimentary templating: {{ }} denotes variables, and timestamp is one of the built-ins. Make sure the ssh_username works for the source_ami, otherwise SSH will fail and nothing will work. Packer creates a temporary key pair for SSH access, so you don't need to provide one. All of this is just setup; nothing has been configured yet. We'll use provisioners for that.

shell is the simplest provisioner. Packer will copy the file to the relevant machine and execute it. You can do almost anything with a shell script, but they become clunky to manage. Ansible is easier for complex tasks, and it includes a lot of integrations. Packer also includes an Ansible provisioner, so there's no heavy lifting required for us. However, we do need to install ansible ourselves. This kills two birds with one stone. Firstly, it demonstrates the simplicity of the shell provisioner; secondly, it demonstrates Packer's flexibility.

Let's start off by writing a shell script to install Ansible. Name this file install-ansible.sh:

    #!/usr/bin/env bash

    # NOTE -x flag. This will print every command run which makes
    # debugging much easier.
    set -xeuo pipefail

    apt-get update -y
    apt-get install -y python-pip libssl-dev

    # NOTE: there is a bug in 2.1 that improperly detects docker-py versions.
    # https://github.com/ansible/ansible-modules-core/issues/5422
    pip install 'ansible==2.0.2.0' 'docker-py>=1.7.0'

The script updates the apt cache, installs the supporting system packages, and then installs the appropriate pip packages. Now, add the shell provisioner to packer.json:

    {
        "builders": [
            {
                "type": "amazon-ebs",
                "region": "ap-southeast-1",
                "source_ami": "ami-a1288ec2",
                "instance_type": "c4.large",
                "ssh_username": "ubuntu",
                "ami_name": "semaphore-packer-tutorial-{{timestamp}}",
                "associate_public_ip_address": true
            }
        ],
        "provisioners": [
            {
                "type": "shell",
                "script": "install-ansible.sh",
                "pause_before": "10s",
                "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E {{ .Path }}"
            }
        ]
    }

There are a few things to pay attention to here. We've added pause_before to mitigate a potential race condition: Packer polls for SSH access, and the SSH daemon may be up before all other system daemons have started. The execute_command uses Packer's templating to ensure the file is executable and invokes it via sudo.

We're just about ready to run packer build. Let's start off with a skeleton script in script/deploy that does that:

    #!/usr/bin/env bash

    set -euo pipefail

    main() {
        packer build packer.json
    }

    main "$@"

Make sure to chmod +x script/deploy, then give it a go. Packer will stream everything to standard out. When packer build completes, we'll have an AMI, but the AMI isn't useful yet. It has only ansible installed. We need to add source code and start a Docker container for that.

Adding Source Code

We need to copy our source code to the machine provisioned by Packer. We can accomplish this with a tar archive, a file provisioner, and user variables. We'll make a tar archive of the current directory, then use the file provisioner to upload it to the machine. We'll need a "user variable" for this. Packer supports user-defined variables, which can be used in the template. We'll create the tar archive before running packer build and pass its location as a user variable.

Start by updating the script/deploy to create a tar file as follows:

    #!/usr/bin/env bash

    set -euo pipefail

    main() {
        local src_package=tmp/src.tar.gz

        mkdir -p "$(dirname "${src_package}")"

        tar -czf "${src_package}" .

        packer build -var "src_package=${src_package}" packer.json
    }

    main "$@"

Now, update packer.json as follows:

    {
        "builders": [
            {
                "type": "amazon-ebs",
                "region": "ap-southeast-1",
                "source_ami": "ami-a1288ec2",
                "instance_type": "c4.large",
                "ssh_username": "ubuntu",
                "ami_name": "semaphore-packer-tutorial-{{timestamp}}",
                "associate_public_ip_address": true
            }
        ],
        "provisioners": [
            {
                "type": "shell",
                "script": "install-ansible.sh",
                "pause_before": "10s",
                "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E {{ .Path }}"
            },
            {
                "type": "file",
                "source": "{{ user `src_package` }}",
                "destination": "/tmp/app.tar.gz"
            }
        ]
    }

The file provisioner reads its source from the src_package user variable (set via the -var "src_package=foo" CLI option) and puts it in /tmp. Now that we have Ansible and our source code, we're ready to build and start our application.

Starting the Application

We'll use Ansible to coordinate this more complex step. The high level process goes like this:

  1. Install Docker,
  2. Build a Docker image from the source tar file,
  3. Start a Docker container using the image, and
  4. Smoke test the running container.

We'll pass the tar file location to Ansible via the extra_vars CLI option. The Docker container will start the application on port 8080. First, update packer.json to run the ansible provisioner:

    {
        "builders": [
            {
                "type": "amazon-ebs",
                "region": "ap-southeast-1",
                "source_ami": "ami-a1288ec2",
                "instance_type": "c4.large",
                "ssh_username": "ubuntu",
                "ami_name": "semaphore-packer-tutorial-{{timestamp}}",
                "associate_public_ip_address": true
            }
        ],
        "provisioners": [
            {
                "type": "shell",
                "script": "install-ansible.sh",
                "pause_before": "10s",
                "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E {{ .Path }}"
            },
            {
                "type": "file",
                "source": "{{ user `src_package` }}",
                "destination": "/tmp/app.tar.gz"
            },
            {
                "type": "ansible-local",
                "playbook_file": "configure-ami.yml",
                "extra_arguments": [
                    "--extra-vars 'src_package=/tmp/app.tar.gz'"
                ]
            }
        ]
    }

Note that the --extra-vars option sets the tar location to the destination specified in the file provisioner. Now, create the configure-ami.yml Ansible playbook. Here's the complete playbook. This tutorial is not specifically about Ansible, so we won't cover it in depth; the content should be self-explanatory with the included comments.

    ---
    - hosts: localhost
      connection: local
      gather_facts: false

      # Run all commands via sudo
      become: true

      # Define variables in this playbook. Here we just need a directory
      # to extract source code to
      vars:
        src_dir: "/opt/app"

      tasks:
        - name: Install Docker pre-reqs
          apt:
            state: present
            name: "{{ item }}"
          with_items:
            - apt-transport-https
            - ca-certificates

        - name: Set up Docker repo key
          command: apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

        - name: Set up Docker apt repo
          apt_repository:
            state: present
            repo: deb https://apt.dockerproject.org/repo ubuntu-xenial main
            update_cache: yes

        # Remove any possible old and conflicting Docker package
        - name: Purge old Docker config files
          apt:
            state: absent
            name: lxc-docker
            purge: yes

        - name: Install Docker
          apt:
            state: present
            name: docker-engine

        - name: Start Docker
          service:
            state: started
            name: docker

        - name: Create src code directory
          file:
            state: directory
            dest: "{{ src_dir }}"

        - name: Extract source code
          unarchive:
            src: "{{ src_package }}"
            dest: "{{ src_dir }}"

        - name: Build Docker image
          docker_image:
            state: present
            name: app
            path: "{{ src_dir }}"

        - name: Start application container
          docker:
            state: started
            name: app
            image: app
            # Tell docker to restart the container if it dies for some reason
            restart_policy: always
            # Send logs to journald since Ubuntu 16.04 uses systemd
            log_driver: journald
            # Set the PORT environment variable required for script/start.js
            env:
              PORT: "8080"
            # NOTE: this module requires setting expose and ports
            # together. This is confusing compared to the normal Docker
            # CLI.
            expose:
              - "8080"
            ports:
              - "8080:8080"

        - name: Wait for the container to start
          pause:
            seconds: 10

        # Run a smoke test. This ensures the container is running and
        # accepting traffic on the intended port. If this passes, the AMI
        # should be good to go
        - name: Smoke test running container
          uri:
            url: http://localhost:8080
            method: GET
            status_code: 200
Run script/deploy and the image should be good to go. Packer shows the final AMI in the output. You could take that AMI and launch an instance, but that's a manual process, so let's automate it.

Deployment

Packer simply makes images. It's our responsibility to deploy them, and that's exactly what we're going to do here. Golden images are easy to deploy and scale because everything is included in the image. Let's deploy our "Hello World" application behind a load balancer and in an autoscaling group. We'll accomplish this with CloudFormation and Ansible. The CloudFormation template configures an Elastic Load Balancer, Auto Scaling Group, Launch Configuration, and security groups. Ansible will create or update the CloudFormation stack, and then run a smoke test against the deployed stack.

Let's start with the CloudFormation template. It requires the following input parameters:

  • The AMI to use,
  • The port application instances receive traffic on,
  • Path to use for health check,
  • A Key pair name for SSH access, and
  • Optional min and max instances in the auto scaling group (1 is the default).

The template creates a security group for each application instance. The security group allows SSH access from anywhere and HTTP traffic from the ELB. Then, an Auto Scaling Group and Launch Configuration are created and associated with the ELB. ELB-to-instance traffic is configured based on the health check and port input parameters. The stack outputs the full URL to the application. You can view the complete CloudFormation template in the source.
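As a rough sketch, the template's Parameters section looks something like the following. The AMI, ApplicationPort, HealthCheckPath, and KeyName names match what our deploy playbook will pass in; the min/max instance parameter names are illustrative, so check the full template in the source for the real ones.

```yaml
Parameters:
  AMI:
    Type: AWS::EC2::Image::Id
  ApplicationPort:
    Type: Number
    Default: 8080
  HealthCheckPath:
    Type: String
    Default: /
  KeyName:
    Type: AWS::EC2::KeyPair::KeyName
  # Illustrative names; the real template may differ
  MinInstances:
    Type: Number
    Default: 1
  MaxInstances:
    Type: Number
    Default: 1
```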

Now, on to the deploy script itself. We'll redirect the packer build output to a file to capture the AMI. Then, we'll pass that value to Ansible for deployment via CloudFormation. Here's the updated script/deploy:


    #!/usr/bin/env bash

    set -euo pipefail

    main() {
        local log_file ami src_package=tmp/src.tar.gz

        log_file="$(mktemp)"

        mkdir -p "$(dirname "${src_package}")"

        tar -czf "${src_package}" .

        packer build -color=false -var "src_package=${src_package}" packer.json | tee "${log_file}"

        ami="$(tail -n 1 "${log_file}" | cut -d ' ' -f 2)"

        ansible-playbook deploy.yml --extra-vars "ami=${ami}"
    }

    main "$@"
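
The tail | cut parsing relies on the last line of Packer's output, which ends with the region followed by the AMI ID. Here's a small sketch of the extraction against a sample line (the AMI ID is made up):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical final line of `packer build -color=false` output
last_line="ap-southeast-1: ami-0123456789abcdef0"

# The second space-separated field is the AMI ID
ami="$(echo "${last_line}" | cut -d ' ' -f 2)"

echo "${ami}"   # prints: ami-0123456789abcdef0
```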

The new deploy.yml Ansible playbook should contain the following:

    ---
    - hosts: localhost
      connection: local
      gather_facts: false
      vars:
        region: ap-southeast-1
        port: "8080"
        healthcheck_path: "/"
        key_name: semaphore-packer-tutorial

      tasks:
        - name: Deploy CloudFormation stack
          cloudformation:
            state: present
            region: "{{ region }}"
            stack_name: semaphore-packer-tutorial
            disable_rollback: true
            template: cloudformation.yml
            template_parameters:
              AMI: "{{ ami }}"
              HealthCheckPath: "{{ healthcheck_path }}"
              ApplicationPort: "{{ port }}"
              KeyName: "{{ key_name }}"
          register: cf

        - name: Wait for switch over
          pause:
            seconds: 15

        - name: Test deployment
          uri:
            url: "{{ cf.stack_outputs.URL }}"
            status_code: 200

The region variable matches the region used in packer.json. The port variable matches the value in the configure-ami.yml playbook. You must create the key pair in the same region. My key pair is semaphore-packer-tutorial.

Run script/deploy. The process may take 10 minutes or so the first time. Everything should work as expected, and the "Hello World" application will be available on the public internet.

Continuous Deployment with Semaphore

For continuous deployment, we'll be using Semaphore — a hosted continuous integration and deployment service. Once we link it to our repo, it will get notified of our subsequent pushes, and will proceed to pull our code to one of its machines and run the deployment commands that we've specified.

The first step is to sign up on Semaphore and add the project to your account. Semaphore will show the projects related to your account (e.g. from GitHub), and you should select the project that you want to deploy. After selecting your project, Semaphore will analyze it and generate a build plan. Building on Semaphore means running user-written tests against the project's code, which can either pass or fail. That process is beyond the scope of this text, so we'll simplify the build plan to make sure the builds pass.

The build plan consists of build commands, and we will remove them so that we always get a passing build. This is certainly not a good practice, but for the sake of this tutorial we just want to skip the testing phase. We'll completely remove the setup group of commands by clicking the 'X' next to 'Edit Job'. After that, we should make sure that Job #1 has only a single, neutral command, such as echo 0.

After this, click 'Build With These Settings'. This should run the build and the build should pass.

After this, click on the name of your project in order to move to the page of your project. Here we will see the 'Branches' section, which shows the current branches of our repo, and their build status (i.e. green for passing, red for failed). Semaphore automatically tracks the branches of a repo and runs builds on them after every git push.

For continuous deployment, we'll set up automatic deployment of certain branches (e.g. the 'master' branch) after they've received a git push resulting in a passing build. This means that we will only automatically deploy new versions of the branches that have passed testing, provided that we wrote tests for our code.

For every build or deploy, Semaphore launches a Linux container that pulls the code from our repository and runs the specified shell commands in the container. Before we proceed to deploying, we need to make sure that Semaphore has the correct tools and access for deployment available inside the container. When it comes to necessary tools, we'll add the following script to our project, as script/bootstrap-semaphore:

    #!/usr/bin/env bash

    set -euo pipefail

    main() {
      curl "https://releases.hashicorp.com/packer/0.11.0/packer_0.11.0_linux_amd64.zip" > packer.zip
      unzip packer.zip -d "/usr/bin"
      pip install ansible boto
    }

    main "$@"

The script will now be available when we deploy our code. Don't forget to push your project to your repo after adding the script. Next, we need to give Semaphore access to our AWS resources by setting AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables of our project on Semaphore. Environment variables can be added through the 'Project Settings' page. The AWS credentials we set here are linked to an AWS IAM user, and they should always be role-specific with appropriate permissions. If you don't have a dedicated CI role already, you can create a new IAM user with the PowerUserAccess policy, which gives access to all AWS services except user access and permission management (IAM).

We're now ready to proceed with deployment. Let's click on 'Set Up Deployment' on our project page on Semaphore. At this point, we will be led through a couple of stages of setting up our deployment. First, we'll be asked whether to deploy automatically from a branch after every passing build of that branch, or manually. Let's pick automatic. After that, we get to pick the branch from which to deploy. Let's pick 'master', or a similar main branch. Next, we'll set our deploy commands. We'll write the following:

script/bootstrap-semaphore
script/deploy

This means that our deployments will run these two commands. We'll skip adding the SSH key. Finally, we'll set a name for the server that we're deploying to.

Now you can push to master, and after a passed build your deployment should start. Semaphore will show you the results of running your deploy commands, and let you know if your deployment was successful.

You can check out the complete source code for all files involved.

Wrap Up

This tutorial has covered a lot of ground. Let's reiterate what we've covered:

  • Introduced Packer,
  • Built an AWS AMI using Packer,
  • Configured that AMI to run a Docker image using Packer's Ansible provisioner,
  • Used a CloudFormation template to create an Elastic Load Balancer, Auto Scaling Group, and Launch Configuration to deploy the application via the AMI, and
  • Used an Ansible playbook to build the AMI and deploy the CloudFormation stack on SemaphoreCI for continuous deployment.

Good luck out there and happy shipping! If you have any questions or comments, feel free to leave them in the section below.

Adam Hawkins

Traveller, trance addict, automation, and continuous deployment advocate. I lead the SRE team at saltside and blog on DevOps for Slashdeploy. Tweet me @adman65.
