In this tutorial, you will learn how to dockerize a Ruby on Rails application. The application we’re going to build will make use of PostgreSQL, Redis and Sidekiq.

We’ll also be using Unicorn in both development and production. If you would prefer to use Puma or something else, this shouldn’t be an issue.

After reading this article, you will have a basic idea of what Docker is, how it can help you as a developer, and how it makes application development and deployment more streamlined.

What is Docker?

Docker allows you to package up an application or service with all of its dependencies into a standardized unit. This unit is typically labeled as a Docker image.

Everything the application needs to run is included. The Docker image contains the code, runtime, system libraries and anything else you would install on a server to make it run if you weren’t using Docker.

What Makes Docker Different from a Virtual Machine

You may have used Vagrant, VirtualBox, or VMWare to run a virtual machine. They allow you to isolate services, but there are a few major differences which make virtual machines much less efficient.

For starters, you need an entire guest operating system for each application you want to isolate. A virtual machine also takes many seconds to boot up, and each VM can potentially be gigabytes in size.

Docker containers share your host’s kernel, and isolation is handled by cgroups and other Linux kernel features. This makes containers very lightweight: one typically starts in about 50 milliseconds, and running a container adds very little disk usage.

What’s the Bottom Line?

What if you could develop your Rails application in isolation on your work station without using RVM or chruby, and changing Ruby versions were super easy?

What if as a consultant or freelancer with 10 Rails projects, you had everything you needed isolated for each project without needing to waste precious SSD disk space?

What if you could spin up your Rails, PostgreSQL, Redis, and Sidekiq stack in about 3 seconds?

What if you wanted to share your project on GitHub and other developers only had to run a single command to get everything running in minutes?

All of this and much more is possible thanks to Docker.

The Benefits of Using Docker

If you’re constantly looking for ways to improve your productivity and make the overall software development experience better, you’ll appreciate the following 5 key benefits Docker offers:

1. Cross Environment Consistency

Docker allows you to encapsulate your application in such a way that you can easily move it between environments. It will work properly in all environments and on all machines capable of running Docker.

2. Expand Your Development Team Painlessly

You should not have to hand a new developer a 30-page document to teach them how to set up your application so they can run it locally. That process can take a full day or longer, and the new developer is bound to make mistakes.

With Docker, every developer on your team can get your multi-service application running on their workstation in an automated, repeatable, and efficient way. You just run a few commands, and minutes later it all works.

3. Use Whatever Technology Fits Best

If you’re a startup or a shop that uses only one language, you could be putting yourself at a disadvantage. Since you can isolate an application in a Docker container, it becomes possible to broaden your horizons as a developer by experimenting with new languages and frameworks.

You no longer have to worry about other developers having to set up your technology of choice. You can hand them a Docker image and tell them to run it.

4. Build Your Image Once and Deploy It Many Times

Since your applications are inside of a pre-built Docker image, they can be started in milliseconds. This makes it very easy to scale up and down.

Time-consuming tasks such as installing dependencies only need to be run once, at build time. Once the image has been built, you can move it to many hosts.

This not only helps with scaling up and down quickly, but it also makes your deployments more predictable and resilient.

5. Developers and Operations Managers Can Work Together

Docker’s toolset allows developers and operations managers to work together towards the common goal of deploying an application.

Docker acts as an abstraction. You can distribute an application, and members of another team do not need to know how to configure or set up its environment.

It also becomes simple to distribute your Docker images publicly or privately, keep tabs on what changed when new versions were pushed, and more.


You will need to install Docker. Docker can be run on most major Linux distributions, and there are tools to let you run it on OSX and Windows too.

This tutorial focuses on Linux users, but it will include comments when things need to be adjusted for OSX or Windows.

Installing Docker

Follow one of the installation guides below for your operating system:

Before proceeding, you should have Docker installed and you need to have completed at least the hello world example included in one of the installation guides above.

This guide expects you to have Docker 1.9.x installed, as it uses features introduced in Docker 1.9.

The Rails Application

The application we’re going to build will be for the latest version of Rails 4, which happens to be 4.2.5 at the time of writing.

However, all of the concepts described below can be used for Rails 5 when it comes out.

Generating a New Rails Application

We’re going to generate a new Rails project without even needing Ruby installed on our work station. We can do this by using the official Rails Docker image.

Creating a Dummy Project

First, let’s create a dummy project:

# OSX/Windows users will want to remove --user "$(id -u):$(id -g)"
docker run -it --rm --user "$(id -u):$(id -g)" \
  -v "$PWD":/usr/src/app -w /usr/src/app rails:4 rails new --skip-bundle dummy

Running it for the first time will take a while because Docker needs to pull the image. The command above will create the application on our work station.

The -v "$PWD":/usr/src/app -w /usr/src/app segment mounts our local working directory at the /usr/src/app path inside the Docker container and makes that path the working directory. This is what allows the container to write the Rails scaffolding to our work station.

The --user flag ensures that you own the files instead of root.

The rails new --skip-bundle dummy bit should look familiar if you’re a Rails developer. That’s the command we’re passing to the Rails image.

You can learn more about the official Rails image on Docker Hub.

Deleting the Project

We created the application in the home directory:

nick@isengard:~ $ ls -la
drwxr-xr-x  12 nick nick  4096 Dec 11 09:48 dummy

Delete it using the following command:

rm -rf dummy

Creating the Real Project

We’ll run the same command as last time, but we will change the name of the project. Note how fast the project gets generated this time around:

# OSX/Windows users will want to remove --user "$(id -u):$(id -g)"
docker run -it --rm --user "$(id -u):$(id -g)" \
  -v "$PWD":/usr/src/app -w /usr/src/app rails:4 rails new --skip-bundle drkiq

It’s basically the same as creating a new Rails project without using Docker.

Setting Up a Strong Base

Before we start adding Docker-specific files to the project, let’s add a few gems to our Gemfile and make a few adjustments to our application to make it production ready.

Modifying the Gemfile

Add the following lines to the bottom of your Gemfile:

gem 'unicorn', '~> 4.9'
gem 'pg', '~> 0.18.3'
gem 'sidekiq', '~> 4.0.1'
gem 'redis-rails', '~> 4.0.0'

Also, make sure to remove the sqlite3 gem near the top.

DRYing Out the Database Configuration

Change your config/database.yml to look like this:


development:
  url: <%= ENV['DATABASE_URL'].gsub('?', '_development?') %>

test:
  url: <%= ENV['DATABASE_URL'].gsub('?', '_test?') %>

staging:
  url: <%= ENV['DATABASE_URL'].gsub('?', '_staging?') %>

production:
  url: <%= ENV['DATABASE_URL'].gsub('?', '_production?') %>

We will be using environment variables to configure our application. The above file allows us to use the DATABASE_URL, while also allowing us to name our databases based on the environment in which they are being run.
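To see what that gsub does, here is the transformation in plain Ruby. The URL below is a hypothetical example value; your real DATABASE_URL will come from the environment file we create later:

```ruby
# Hypothetical DATABASE_URL for illustration only.
database_url = 'postgresql://drkiq:yourpassword@postgres:5432/drkiq?encoding=utf8'

# database.yml splices the environment name in just before the query string,
# so each environment gets its own database from one shared variable.
dev_url = database_url.gsub('?', '_development?')
puts dev_url
# => postgresql://drkiq:yourpassword@postgres:5432/drkiq_development?encoding=utf8
```

The same one-variable trick then yields `drkiq_test`, `drkiq_staging`, and `drkiq_production` databases without duplicating connection details.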

DRYing Out the Secrets File

Change your config/secrets.yml to look like this:


development: &default
  secret_key_base: <%= ENV['SECRET_TOKEN'] %>

test:
  <<: *default

staging:
  <<: *default

production:
  <<: *default

If you’ve never seen this syntax before, `&default` defines a YAML anchor and `<<: *default` merges it in, so every environment reuses the same SECRET_TOKEN environment variable.

This is fine, since the value will be different in each environment.
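If you’re curious what a SECRET_TOKEN value looks like, the `rake secret` task essentially prints a long random hex string. You can produce an equivalent value with Ruby’s standard library:

```ruby
require 'securerandom'

# rake secret emits a 128-character hex string; 64 random bytes,
# hex encoded, gives you the same shape of value.
token = SecureRandom.hex(64)
puts token.length
# => 128
```

Generate a different token for each environment and keep the production one out of version control.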

Editing the Application Configuration

Add the following lines to your config/application.rb:

# ...

module Drkiq
  class Application < Rails::Application
    # We want to set up a custom logger which logs to STDOUT.
    # Docker expects your application to log to STDOUT/STDERR and to be run
    # in the foreground.
    config.log_level = :debug
    config.log_tags  = [:subdomain, :uuid]
    config.logger    = ActiveSupport::TaggedLogging.new(Logger.new(STDOUT))

    # Since we're using Redis for Sidekiq, we might as well use Redis to back
    # our cache store. This keeps our application stateless as well.
    config.cache_store = :redis_store, ENV['CACHE_URL'],
                         { namespace: 'drkiq::cache' }

    # If you've never dealt with background workers before, this is the Rails
    # way to use them through Active Job. We just need to tell it to use Sidekiq.
    config.active_job.queue_adapter = :sidekiq

    # ...
  end
end
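Outside of Rails, the log-to-STDOUT idea looks like this with Ruby’s stdlib Logger. This is a minimal sketch; the Rails config above layers tagged logging on top of the same pattern:

```ruby
require 'logger'

# Write to STDOUT so the process stays in the foreground and Docker can
# capture the stream with `docker logs` or `docker-compose logs`.
logger = Logger.new($stdout)
logger.level = Logger::DEBUG
logger.info('Booted; Docker captures this output stream')
```

Logging to a file inside a container is an anti-pattern, since the file disappears with the container unless you mount a volume for it.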

Creating the Unicorn Config

Next, create the config/unicorn.rb file and add the following content to it:

# Heavily inspired by GitLab:

# Go with at least 1 per CPU core, a higher amount will usually help for fast
# responses such as reading from a cache.
worker_processes ENV['WORKER_PROCESSES'].to_i

# Listen on a tcp port or unix socket.
listen ENV['LISTEN_ON']

# Use a shorter timeout instead of the 60s default. If you are handling large
# uploads you may want to increase this.
timeout 30

# Combine Ruby 2.0.0dev or REE with "preload_app true" for memory savings:
preload_app true
GC.respond_to?(:copy_on_write_friendly=) && GC.copy_on_write_friendly = true

# Enable this flag to have unicorn test client connections by writing the
# beginning of the HTTP headers before calling the application. This
# prevents calling the application for connections that have disconnected
# while queued. This is only guaranteed to detect clients on the same
# host unicorn runs on, and unlikely to detect disconnects even on a
# fast LAN.
check_client_connection false

before_fork do |server, worker|
  # Don't bother having the master process hang onto older connections.
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!

  # The following is only recommended for memory/DB-constrained
  # installations. It is not needed if your system can house
  # twice as many worker_processes as you have configured.
  # This allows a new master process to incrementally
  # phase out the old master process with SIGTTOU to avoid a
  # thundering herd (especially in the "preload_app false" case)
  # when doing a transparent upgrade. The last worker spawned
  # will then kill off the old master process with a SIGQUIT.
  old_pid = "#{server.config[:pid]}.oldbin"
  if old_pid != server.pid
    begin
      sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
      Process.kill(sig, File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
    end
  end

  # Throttle the master from forking too quickly by sleeping. Due
  # to the implementation of standard Unix signal handlers, this
  # helps (but does not completely) prevent identical, repeated signals
  # from being lost when the receiving process is busy.
  # sleep 1
end

after_fork do |server, worker|
  # Per-process listener ports for debugging, admin, migrations, etc..
  # addr = "{9293 + worker.nr}"
  # server.listen(addr, tries: -1, delay: 5, tcp_nopush: true)

  defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection

  # If preload_app is true, then you may also want to check and
  # restart any other shared sockets/descriptors such as Memcached,
  # and Redis. TokyoCabinet file handles are safe to reuse
  # between any number of forked children (assuming your kernel
  # correctly implements pread()/pwrite() system calls).
end

Creating the Sidekiq Initializer

Now you can also create the config/initializers/sidekiq.rb file and add the following code to it:

sidekiq_config = { url: ENV['JOB_WORKER_URL'] }

Sidekiq.configure_server do |config|
  config.redis = sidekiq_config
end

Sidekiq.configure_client do |config|
  config.redis = sidekiq_config
end

Creating the Environment Variable File

Last but not least, you need to create the .drkiq.env file and add the following code to it:

# You would typically use rake secret to generate a secure token. It is
# critical that you keep this value private in production.
SECRET_TOKEN=asecuretokenwouldnormallygohere

# Unicorn is more than capable of spawning multiple workers, and in production
# you would want to increase this value, but in development you should keep it
# set to 1.
# It becomes difficult to properly debug code if there are multiple copies of
# your application running via workers and/or threads.
WORKER_PROCESSES=1

# This will be the address and port that Unicorn binds to. The only real
# reason you would ever change this is if you have another service running
# that must be on port 8000.
LISTEN_ON=0.0.0.0:8000

# This is how we'll connect to PostgreSQL. It's good practice to keep the
# username lined up with your application's name, but it's not necessary.
# Since we're dealing with development mode, it's ok to have a weak password
# such as yourpassword, but in production you'll definitely want a better one.
# Eventually we'll be running everything in Docker containers, and you can set
# the host to be equal to postgres thanks to how Docker allows you to link
# containers.
# Everything else is standard Rails configuration for a PostgreSQL database.
DATABASE_URL=postgresql://drkiq:yourpassword@postgres:5432/drkiq?encoding=utf8&pool=5&timeout=5000

# Both of these values are using the same Redis address, but in a real
# production environment you may want to separate Sidekiq to its own instance,
# which is why they are separated here.
# We'll be using the same Docker link trick for Redis, which is how we can
# reference the Redis hostname as redis.
CACHE_URL=redis://redis:6379/0
JOB_WORKER_URL=redis://redis:6379/0

The above file allows us to configure the application without having to dive into the application code. This is a very important step to making your application production ready.

This file would also hold information like mail login credentials or API keys. You should also add this file to your .gitignore, so go ahead and do that now.
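One gotcha when the application reads these variables in Ruby: `ENV[]` returns nil for a missing variable, and nil converts to unhelpful defaults. A small sketch, using the WORKER_PROCESSES variable from this file as the example:

```ruby
# ENV[] returns nil for a missing variable, and nil.to_i is 0, so a typo
# in the env file would silently start Unicorn with zero workers.
ENV.delete('WORKER_PROCESSES')
puts ENV['WORKER_PROCESSES'].to_i
# => 0

# ENV.fetch either raises on a missing key or falls back to an explicit
# default, which fails much more loudly than a silent zero.
puts Integer(ENV.fetch('WORKER_PROCESSES', '1'))
# => 1
```

Preferring `ENV.fetch` in config files is a cheap way to catch a broken environment at boot instead of at 3 a.m.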

Dockerizing Your Rails Application

We’ll need to add 3 files to the project, but only the first one is mandatory.

Creating the Dockerfile

Create the Dockerfile file and add the following content to it:

# Use the barebones version of Ruby 2.2.3.
FROM ruby:2.2.3-slim

# Optionally set a maintainer name to let people know who made this image.
MAINTAINER Nick Janetakis <>

# Install dependencies:
#   - build-essential: To ensure certain gems can be compiled
#   - nodejs: Compile assets
#   - libpq-dev: Communicate with postgres through the postgres gem
#   - postgresql-client-9.4: In case you want to talk directly to postgres
RUN apt-get update && apt-get install -qq -y build-essential nodejs libpq-dev postgresql-client-9.4 --fix-missing --no-install-recommends

# Set an environment variable to store where the app is installed to inside
# of the Docker image.
ENV INSTALL_PATH /drkiq
RUN mkdir -p $INSTALL_PATH

# This sets the context of where commands will be run in and is documented
# on Docker's website extensively.
WORKDIR $INSTALL_PATH
# Ensure gems are cached and only get updated when they change. This will
# drastically increase build times when your gems do not change.
COPY Gemfile Gemfile
RUN bundle install

# Copy in the application code from your work station at the current directory
# over to the working directory.
COPY . .

# Provide dummy data to Rails so it can pre-compile assets.
RUN bundle exec rake RAILS_ENV=production DATABASE_URL=postgresql://user:pass@ SECRET_TOKEN=pickasecuretoken assets:precompile

# Expose a volume so that nginx will be able to read in assets in production.
VOLUME ["$INSTALL_PATH/public"]
# The default command that gets ran will be to start the Unicorn server.
CMD bundle exec unicorn -c config/unicorn.rb

The above file creates the Docker image. It can be used in development as well as production or any other environment you want.

Creating a dockerignore File

Next, create the .dockerignore file and add the following content to it:

.git
log
tmp

This file is similar to .gitignore. It will exclude matching files and folders from being built into your Docker image.

Creating the Docker Compose Configuration File

Next, we will create the docker-compose.yml file and copy the following content into it:

postgres:
  image: postgres:9.4.5
  environment:
    POSTGRES_USER: drkiq
    POSTGRES_PASSWORD: yourpassword
  ports:
    - '5432:5432'
  volumes:
    - drkiq-postgres:/var/lib/postgresql/data

redis:
  image: redis:3.0.5
  ports:
    - '6379:6379'
  volumes:
    - drkiq-redis:/var/lib/redis/data

drkiq:
  build: .
  links:
    - postgres
    - redis
  volumes:
    - .:/drkiq
  ports:
    - '8000:8000'
  env_file:
    - .drkiq.env

sidekiq:
  build: .
  command: bundle exec sidekiq -C config/sidekiq.yml
  links:
    - postgres
    - redis
  volumes:
    - .:/drkiq
  env_file:
    - .drkiq.env

If you’re using Linux, you will need to download Docker Compose. You can grab the latest 1.5.x release from the docker/compose GitHub repo.

If you’re using OSX or Windows and are using the Docker Toolbox, then you should already have this tool.

What is Docker Compose?

Docker Compose allows you to run 1 or more Docker containers easily. You can define everything in YAML and commit this file so that other developers can simply run docker-compose up and have everything running quickly.

Additional Information

Everything in the above file is documented on Docker Compose’s website. The short version is:

  • Postgres and Redis use Docker volumes to manage persistence
  • Postgres, Redis and Drkiq all expose a port
  • Drkiq and Sidekiq both use volumes to mount in app code for live editing
  • Drkiq and Sidekiq both have links to Postgres and Redis
  • Drkiq and Sidekiq both read in environment variables from .drkiq.env
  • Sidekiq overwrites the default CMD to run Sidekiq instead of Unicorn

Creating the Volumes

In the docker-compose.yml file, we’re referencing volumes that do not exist. We can create them by running:

docker volume create --name drkiq-postgres
docker volume create --name drkiq-redis

When data is saved in PostgreSQL or Redis, it is written to these volumes on your work station. This way you won’t lose your data when the containers are restarted or recreated, since the containers themselves are treated as disposable.

Running Everything

Now it’s time to put everything together and start up our stack by running the following:

docker-compose up

The first time this command runs it will take quite a while because it needs to pull down all of the Docker images that our application requires.

This operation is mostly bound by network speed, so your times may vary.

At some point, it’s going to begin building the Rails application. You will eventually see the terminal output, including lines similar to these:

postgres_1  | ...
redis_1     | ...
drkiq_1     | ...
sidekiq_1   | ...

You will notice that the drkiq_1 container threw an error saying the database doesn’t exist. This is a completely normal error to expect when running a Rails application because we haven’t initialized the database yet.

Initialize the Database

Hit CTRL+C in the terminal to stop everything. If you see any errors, you can safely ignore them.

Run the following commands to initialize the database:

# OSX/Windows users will want to remove --user "$(id -u):$(id -g)"
docker-compose run --user "$(id -u):$(id -g)" drkiq rake db:reset
docker-compose run --user "$(id -u):$(id -g)" drkiq rake db:migrate

The first command should warn you that db/schema.rb doesn’t exist yet, which is normal. Run the second command to remedy that. It should run successfully.

If you head over to the db folder in your project, you should notice that there is a schema.rb file and that it’s owned by your user.

You may also have noticed that running either of the commands above also started Redis and PostgreSQL automatically. This is because we have them defined as links. docker-compose is smart enough to start dependencies.

Running Everything, Round 2

Now that our database is initialized, try running the following:

docker-compose up

On a quad core i5 with an SSD everything loaded in about 3 seconds.

Testing It Out

Head over to http://localhost:8000/. If you’re using Docker Toolbox, then you should go to the IP address that was given to you by the Docker terminal.

You should be greeted with the typical Rails introduction page.

Working with the Rails Application

Now that we’ve Dockerized our application, let’s start adding features to it to exercise the commands you’ll need to run to interact with your Rails application.

Right now the source code is on your work station, and that source code is being mounted into the Docker container in real time through a volume.

This means that if you were to edit a file, the changes would take effect instantly, but right now we have no routes or any CSS defined to test this.

Generating a Controller

Run the following command to generate a Pages controller with a home action:

# OSX/Windows users will want to remove --user "$(id -u):$(id -g)"
docker-compose run --user "$(id -u):$(id -g)" drkiq rails g controller Pages home

In a second or two, it should provide everything you would expect when generating a new controller.

This type of command is how you’ll run future Rails commands. If you wanted to generate a model or run a migration, you would run them in the same way.

Modify the Routes File

Remove the get 'pages/home' line near the top of config/routes.rb and replace it with the following:

root 'pages#home'

If you go back to your browser, you should see the new home page we have set up.

Adding a New Job

Use the following to add a new job:

# OSX/Windows users will want to remove --user "$(id -u):$(id -g)"
docker-compose run --user "$(id -u):$(id -g)" drkiq rails g job counter

Modifying the Counter Job

Next, replace the perform method in app/jobs/counter_job.rb so it looks like this:

def perform(*args)
  21 + 21
end

Modifying the Pages Controller

Replace the home action so it looks like this:

def home
  # We are executing the job on the spot rather than in the background to
  # exercise using Sidekiq in a trivial example.
  # Consult the Rails documentation to learn more about Active Job.
  @meaning_of_life = CounterJob.perform_now
end
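If you want a feel for what perform_now returns here without booting Rails, this stripped-down stand-in mimics the job. It is only a sketch; the real CounterJob inherits from ActiveJob::Base, which is what actually provides perform_now:

```ruby
# Minimal stand-in for the generated Active Job class. In the real app,
# perform_now is supplied by Active Job and runs the job synchronously.
class CounterJob
  def self.perform_now(*args)
    new.perform(*args)
  end

  def perform(*args)
    21 + 21
  end
end

puts CounterJob.perform_now
# => 42
```

`perform_later` would instead enqueue the job into Sidekiq via the queue adapter we configured earlier.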

Modifying the Home View

The next step is to replace the contents of app/views/pages/home.html.erb with the following:

<h1>The meaning of life is <%= @meaning_of_life %></h1>
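The `<%= %>` tag is plain ERB, so you can reproduce what this view renders with Ruby’s standard library:

```ruby
require 'erb'

# Simulate the instance variable the controller assigns for the view.
@meaning_of_life = 42

template = ERB.new('<h1>The meaning of life is <%= @meaning_of_life %></h1>')
puts template.result(binding)
# => <h1>The meaning of life is 42</h1>
```

Rails evaluates the template in the view context, but the substitution mechanics are exactly this.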

Restart the Rails Application

You need to restart the Rails server to pick up new jobs, so hit CTRL+C to stop everything, and then run docker-compose up again.

If you reload the website you should see the changes we made.

Experimenting on Your Own

Here are three things you should do to familiarize yourself with your new application:

  • Change the h1 color to something other than black
  • Generate a model and then run a migration
  • Add a new action and route to the application

All of these things can be done without having to restart anything, so feel free to check out the changes after you have performed each one.

Continuous Integration for Docker projects on Semaphore

You can easily set up continuous integration for your Docker projects on Semaphore.

The first thing you’ll need to do is sign up for a free Semaphore account. Then, add your Docker project repository. If your project has a Dockerfile or docker-compose.yml, Semaphore will automatically recommend the platform with Docker support.

Now you can run your images just as you would on your local machine, for example:

  docker build -t <your-project> .
  docker run <your-project>

With your Docker container up and running, you can now run all kinds of tests against it. To learn more about using Docker on Semaphore, you can check out Semaphore’s Docker documentation pages.

Where to Go Next?

Congratulations! You’ve finished dockerizing your Ruby on Rails application.

If you would like to learn more about Docker and how to deploy a Ruby on Rails application to production in an automated way, you can follow the link below to get a 20% discount on Nick’s new course Docker for DevOps: From development to production.

P.S. Would you like to learn how to build sustainable Rails apps and ship more often? We’ve recently published an ebook covering just that — “Rails Testing Handbook”. Learn more and download a free copy.
