
Semaphore Blog

News and updates from your friendly continuous integration and deployment service.

Hands-free deployment of static websites


Static websites seem to be going through a period of renaissance. About fifteen years ago, webmasters (the people who you could always contact) manually managed all HTML files for a website. Due to lots of duplicate content, “Find & Replace in Files” was the hottest feature in text editors. Version control was mostly unheard of and updates were done over FTP.

In the mid-2000s, the most common way to create a simple website with some pages, news, a blog etc. was to use a CMS. Maintaining every one of these websites thus included managing upgrades of the CMS, as well as of the database which was driving the content.

When Tom Preston-Werner launched Jekyll, a static site generator, in 2008, many programmers started porting their blogs and personal websites to it. It became obvious then: if all my content is text, let’s generate it from some readable set of files, written in my favorite editor, tracked in version control. Around the same time, the browser became an alternative operating system with a rich API and new capabilities in scripting and rendering. Today many web development and design shops use static site generators to create websites that do not have the complex requirements of a web application.

Deploying Semaphore documentation

Two important parts of Semaphore are made with Middleman, another static site generator: this blog and our documentation site. I’ll show you how we use Semaphore to automatically publish new content.

The Git repository is open source and on GitHub. We’re using the S3Sync plugin to deploy the generated site to Amazon S3.

We’ve added the project on Semaphore to build with the following commands:

bundle install --path .bundle
bundle exec middleman build
bundle exec middleman s3_sync

To make the S3 plugin work, we also need valid S3 credentials. We do not store that file in the repository, but have instead defined a custom configuration file on Semaphore which becomes available during the build:

File with sensitive credentials which Semaphore will insert during the build.
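The plugin reads its configuration from Middleman’s config.rb. As a rough sketch (the bucket name, region and environment variable names below are placeholders, not our actual settings), the activation block can look like this:

```ruby
# config.rb -- sketch of activating the S3Sync plugin.
# Bucket, region and ENV variable names are illustrative;
# the real credentials live in the custom file on Semaphore.
activate :s3_sync do |s3_sync|
  s3_sync.bucket                = "docs.example.com"
  s3_sync.region                = "us-east-1"
  s3_sync.aws_access_key_id     = ENV["AWS_ACCESS_KEY_ID"]
  s3_sync.aws_secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
end
```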

Now I can either edit the site files locally and git push, or use GitHub’s web interface to create and edit files. Either way, Semaphore will run the commands above after every push, making sure that the latest content is continuously deployed.

In this case there isn’t a test command that makes sense so we do the deployment in builds. In cases where you can separate the build and deploy steps, you may find Semaphore’s “Generic” deploy strategy useful, a small feature we launched today.

The Generic strategy lets you specify any deployment steps to run after successful builds, with a deploy SSH key being completely optional.

Announcing Support for JavaScript and Node.js Projects on Semaphore

We are very excited to announce that Semaphore now has full support for testing and deploying JavaScript projects via Node.js.

Semaphore started out with the goal to provide the best CI experience for private Ruby projects. Over time we’ve been gradually making it possible to work with projects in other languages by expanding our build platform. The web browser is the most popular platform today, and the sheer number of JavaScript projects on GitHub reflects that, as does the number of requests by our existing users. Many were already testing their JavaScript projects on Semaphore, so it was a logical next step for us to upgrade its status.

Setting up your Node.js project

The steps for setting up your Node.js project on Semaphore are as simple as for setting up a Ruby project.

During the analysis of your repository Semaphore will recognize that it is a JavaScript project and preselect a version of Node.js to be used in builds. You can still change the target language and version manually of course. The suggested build commands will reflect your selection.

Semaphore can autoconfigure your JavaScript project for continuous integration.

In the build environment, Semaphore has the Node Package Manager preinstalled, so you can easily install the dependencies specified in package.json, or install new global packages using npm, for example:

npm install <some_cool_node_package>

If you do not have a Semaphore account, sign up and add your first JavaScript project today.

Available versions

Semaphore’s current build platform has the following versions of Node.js preinstalled:

  • 0.8.26
  • 0.10.24 (latest stable version)
  • 0.11.10 (latest unstable version)

For managing versions we use Node Version Manager. For easy switching, we have set the corresponding version aliases as:

  • 0.8 => 0.8.26
  • 0.10 => 0.10.24
  • 0.11 => 0.11.10

If you want to use another version, you can simply add a build setup command to install it:

nvm install <desired_node_version>

Semaphore also makes the following tools available:

Database access

If you are developing a Node.js project that works with a database, see our refreshed Database access documentation page for information about the engines available and required credentials.

For Rails apps we generate a working config/database.yml based on the database selected in project settings. We are open to identifying a similar common use case for Node.js apps, so if you have a Node.js project that works with a database, feel free to get in touch and tell us how you configure it.

Rails Testing Antipatterns: Models

This is the second post in the series about antipatterns in testing Rails applications. See part 1 for thoughts related to fixtures and factories.

Creating records when building would also work

It is common sense that with plain ActiveRecord or a factory library you can either create records in the database or only instantiate them. However, probably because testing models so often needs records to be present in the database, I have seen myself and others sometimes forget the build-only option. It is one of the key ways to make your test suite faster.

Consider this spec for full_name:

describe User do
  describe "#full_name" do

    before do
      @user = User.create(:first_name => "Johnny", :last_name => "Bravo")
    end

    it "is composed of first and last name" do
      @user.full_name.should eql("Johnny Bravo")
    end
  end
end

It triggers an interaction with the database and disk drive to test logic which does not require it at all. Using User.new would suffice. Multiply that by the number of examples reusing that before block, and all similar occurrences, and you get a lot of overhead.
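To see that the logic itself is database-free, here is the same full_name behavior on a plain Ruby object (a hypothetical stand-in, not the app’s actual model):

```ruby
# A plain-old Ruby object standing in for the model -- no ActiveRecord,
# no database, same full_name logic.
class PlainUser
  attr_reader :first_name, :last_name

  def initialize(first_name, last_name)
    @first_name = first_name
    @last_name  = last_name
  end

  def full_name
    "#{first_name} #{last_name}"
  end
end

user = PlainUser.new("Johnny", "Bravo")
user.full_name # => "Johnny Bravo"
```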

All model, no unit tests

MVC plus services is good enough for typical Rails apps when they’re starting out. Over time however the application can benefit from some domain specific classes and modules. Having all logic in models ties it to the database, eventually leading to tests so slow that you avoid running them.

If your experience with testing started with Rails, you may be associating unit tests with Rails models. However, in “classical” testing terminology, unit tests are considered to be mostly tests of simple, “plain-old” objects. As described by the test pyramid metaphor, unit tests should be by far the most frequent and the fastest to run.

Not having any unit tests in a large application is not technically an antipattern in testing, but an indicator of a possible architectural problem.

Take for example a simple shopping cart model:

class ShoppingCart < ActiveRecord::Base
  has_many :cart_items

  def total_discount
    cart_items.map(&:discount).inject(0, :+)
  end
end

The test would go something like this:

require "spec_helper"

describe ShoppingCart do

  describe "#total_discount" do

    before do
      @shopping_cart = ShoppingCart.create

      @shopping_cart.cart_items.create(:discount => 10)
      @shopping_cart.cart_items.create(:discount => 20)
    end

    it "returns the sum of all items' discounts" do
      @shopping_cart.total_discount.should eql(30)
    end
  end
end

Observe how the logic of ShoppingCart#total_discount actually doesn’t care where cart_items came from. They just need to be present and have a discount attribute.

We can refactor it into a module and put it in lib/. That would be a good first pass: merely moving a piece of code to a new file has benefits. I personally prefer to use modules only when I see behavior that will be shared by more than one class, so let’s try composition instead. Note that in real life you usually have more methods that can be grouped together and placed in a more sophisticated directory structure.

# lib/shopping_cart_calculator.rb
class ShoppingCartCalculator

  def initialize(cart)
    @cart = cart
  end

  def total_discount
    @cart.cart_items.map(&:discount).inject(0, :+)
  end
end

To test the logic defined in ShoppingCartCalculator, we no longer need to work with the database, or even Rails at all:

require 'shopping_cart_calculator'

describe ShoppingCartCalculator do

  describe "#total_discount" do

    before do
      @cart = double
      cart_items = [stub(:discount => 10),
                    stub(:discount => 20)]
      @cart.stub(:cart_items) { cart_items }

      @calculator = ShoppingCartCalculator.new(@cart)
    end

    it "returns the sum of all cart items' discounts" do
      @calculator.total_discount.should eql(30)
    end
  end
end

Notice how the spec for the calculator no longer needs to require spec_helper. Requiring spec_helper in a spec file generally means that your whole Rails application needs to load before the first example runs. Depending on your machine and application size, running a single spec may then take from a few to 30+ seconds. This can be avoided with application preloading tools such as Spork or Spring, but it is nice if you can achieve independence from them.
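To drive the point home, the calculator runs on any objects that respond to cart_items and discount. A self-contained sketch using OpenStruct stand-ins, no RSpec doubles or database required:

```ruby
require "ostruct"

# Same calculator as in lib/shopping_cart_calculator.rb
class ShoppingCartCalculator
  def initialize(cart)
    @cart = cart
  end

  def total_discount
    @cart.cart_items.map(&:discount).inject(0, :+)
  end
end

# Any plain objects with the right interface will do
cart = OpenStruct.new(:cart_items => [OpenStruct.new(:discount => 10),
                                      OpenStruct.new(:discount => 20)])

ShoppingCartCalculator.new(cart).total_discount # => 30
```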

To get more in-depth with the idea I recommend watching the Fast Rails Tests talk by Corey Haines. Code Climate has a good post on practical ways to decompose fat ActiveRecord models.

Note that new Rails 4 apps are encouraged to use concerns, which are a handy way of extracting model code that DHH recommended a while ago.

The elegance of the entire suite being extremely fast, not requiring spec_helper and so on is secondary, in my opinion. There are many projects that can benefit from this technique even when it is only partially applied, because they have all their logic in models: thousands of lines of code that can be extracted into separate modules or classes. Also, do not interpret the isolated example above as a call to remove all methods from models in projects of all sizes. Develop and use your own sense of when it is a good moment to perform such architectural changes.

Using let to set context

Using let(:model) (from RSpec) in model tests may lead to unexpected results. let is lazy, so it doesn’t execute the provided block until you reference the variable later in your test. This can lead to errors which make you think that there is something wrong with the database, but of course there is not; the data is not there because the let block has not been executed yet. The non-lazy let! is an alternative, but it’s not worth the trouble. Simply use before blocks to initialize data.

describe ShoppingCart do

  describe ".empty_carts" do
    let(:shopping_cart) { ShoppingCart.create }

    it "returns all empty carts" do
      # fails because let is never executed
      ShoppingCart.empty_carts.should have(1).item
    end
  end
end
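Under the hood, let is essentially lazy memoization. A plain-Ruby sketch of the mechanism (names are illustrative):

```ruby
executed = false
definitions = { :shopping_cart => lambda { executed = true; :cart_record } }

# Lazy memoization: the definition block only runs on first access
memo = Hash.new { |hash, name| hash[name] = definitions[name].call }

executed             # => false: nothing has run yet
memo[:shopping_cart] # first reference triggers the block
executed             # => true
```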

On a side note, some people prefer not to use let at all, anywhere. The idea is that let is often used as a way to “DRY up” specs, when what is actually going on is that we’re trying to hide the complexity of the test setup, which is a code smell.

Incidental database state

Magic numbers and incidental state can sneak into specs that deal with database records.

describe Task do
  describe "#complete" do

    before do
      @task = create(:task)
    end

    it "completes the task" do
      @task.complete

      # Creating another completed task above while extending the test suite
      # would make this test fail.
      Task.completed.count.should eq(1)
    end

    # Take 2: aim to measure in relative terms:
    it "completes the task" do
      expect { @task.complete }.to change(Task.completed, :count).by(1)
    end
  end
end
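The same relative-measurement idea in plain Ruby terms: assert on the change, not on an absolute count that other setup code might affect:

```ruby
# Hypothetical in-memory stand-in for the Task.completed scope
tasks = [{ :completed => true }, { :completed => false }]

completed_before = tasks.count { |t| t[:completed] }
tasks.last[:completed] = true                        # the action under test
completed_after = tasks.count { |t| t[:completed] }

completed_after - completed_before # => 1, no matter how many completed
                                   #    tasks already existed beforehand
```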

Rails Testing Antipatterns: Fixtures and Factories

In the upcoming series of posts, we’ll explore some common antipatterns in writing tests for Rails applications. The presented opinions come from our experience in building web applications with Rails (we’ve been doing it since 2007) and are biased towards using RSpec and Cucumber. Developers working with other technologies will probably benefit from reading as well.

Antipattern zero: no tests at all

If your app has at least some tests, congratulations: you’re among the better developers out there. If you think that writing tests is hard — it is, but you just need a little more practice. I recommend reading the RSpec book if you haven’t yet. If you don’t know how to add more tests to a large system you inherited, I recommend going through Working Effectively with Legacy Code. If you have no one else to talk to about testing in your company, there are many great people to meet at events such as CITCON.

If you recognize some of the practices discussed here in your own code, don’t worry. The methodology is evolving and many of us have “been there and done that”. And finally, this is all just advice. If you disagree, feel free to share your thoughts in the comment section below. Now, onwards with the code.

Fixtures and factories

Using fixtures

Fixtures are Rails’ default way to prepare and reuse test data. Do not use fixtures.

Let’s take a look at a simple fixture:

# users.yml
marko:
  first_name: Marko
  last_name: Anastasov
  phone: 555-123-6788

You can use it in a test like this:

describe User do
  describe "#full_name" do
    it "is composed of first and last name" do
      user = users(:marko)
      user.full_name.should eql("Marko Anastasov")
    end
  end
end

There are a few problems with this test code:

  • It is not clear where the user came from and how it is set up.
  • We are testing against a “magic value” — implying something was defined in some code, somewhere else.

In practice these shortcomings are addressed by comment essays:

describe Dashboard do

  fixtures :all

  describe "#show" do
    before do
      # User with preferences to view posts about kittens
      # and in the group with special access to Burmese cats
      # with 4 friends that like ridgeback dogs.
      @user = users(:kitten_fan)
    end
  end
end

Maintaining fixtures of more complex records can be tedious. I recall working on an app where there was a record with dozens of attributes. Whenever a column would be added or changed in the schema, all fixtures needed to be changed by hand. Of course I only recalled this after a few test failures.

A common solution is to use factories. If you recall from the common design patterns, factories are responsible for creating whatever you need to create, in this case records. Factory Girl is a good choice.

Factories let you maintain simple definitions in a single place, but manage all data related to the current test in the test itself when you need to. For example:

FactoryGirl.define do
  factory :user do
    first_name "Marko"
    last_name  "Anastasov"
    phone "555-123-6788"
  end
end

Now your test can set the related attributes before checking for the expected outcome:

describe User do
  describe "#full_name" do
    before do
      @user = build(:user, :first_name => "Johnny", :last_name => "Bravo")
    end

    it "is composed of first and last name" do
      @user.full_name.should eql("Johnny Bravo")
    end
  end
end

A good factory library will let you not just create records, but easily generate unsaved model instances, stubbed models, attribute hashes, define types of records and more — all from a single definition source. Factory Girl’s getting started guide has more examples.

Factories pulling too many dependencies

Factories let you specify associations, which get automatically created. For example, this is how we say that creating a new Comment should automatically create a Post that it belongs to:

FactoryGirl.define do
  factory :comment do
    body "groundbreaking insight"
    post
  end
end

Ask yourself if creating or instantiating that post in every call to the Comment factory is really necessary. It might be if your tests require a record that was saved in the database, and you have a validation on Comment#post_id. But that may not be the case with all associations.

In a large system, calling one factory may silently create many associated records, which accumulates to make the whole test suite slow (more on that later). As a guideline, always try to create the smallest amount of data needed to make the test work.

Factories that contain unnecessary data

A spec is effectively a specification of behavior. That is how we look at it when we open one. Similarly, we look at factories as definitions of data necessary for a model to function.

In the first factory example above, including phone in the User factory was not necessary if there is no presence validation on it. If the data is not critical, just remove it.

Factories depending on database records

Adding a hard dependency on specific database records in factory definitions leads to build failures in the CI environment. Consider the following example:

factory :active_schedule do
  start_date 1.month.ago
  end_date 1.month.from_now
  processing_status 'processed'
  schedule_duration ScheduleDuration.find_by_name('Custom')
end

It is important to know that the code for factories is executed when the Rails test environment loads. This may not be a problem locally, because the test database was created and some kind of seed structure applied at some point in the past. In the CI environment, however, the build starts from a blank database, so the first Rake task it runs will fail. To reproduce and identify such an issue locally, you can run db:drop, followed by db:setup.

One way to fix this is to use factory_girl’s traits:

factory :schedule_duration do
  name "Test Duration"

  trait :custom do
    name "Custom"
  end
end

factory :active_schedule do
  association :schedule_duration, :custom
end

Another is to defer the initialization in a callback. This however adds an implicit requirement that test code defines the associated record before the parent:

factory :active_schedule do
  before(:create) do |schedule|
    schedule.schedule_duration = ScheduleDuration.find_by_name('Custom')
  end
end
Got anything to add? I’d love to hear your comments below.

Update 2014-01-22: Changed fixtures examples to use key-based lookup.

December platform update with Ruby 2.1.0 final, Grunt, Leiningen & more

The upcoming platform update, scheduled for December 30th, brings a couple of interesting changes.

Ruby 2.1.0 final was released on Christmas, and it is now coming to your builds on Semaphore as well.

Firefox is going up to version 26 (is anyone still counting browser versions?). An upgrade of the selenium-webdriver gem may be necessary if you’re using Cucumber.

To make it faster to build Node.js and web-only projects, we’re making Grunt, the JavaScript task runner, and Bower, a package manager for the web, available preinstalled.

Leiningen, Clojure’s Swiss Army knife, becomes available as well.

As usual, full details are in the changelog.

Have a wonderful New Year!

Upgrade your paranoia with Brakeman

Brakeman is a wonderful tool that can scan your Ruby on Rails application for common vulnerabilities. It’s a static code analyzer that can help you find security problems before you release the application.

Since you can introduce a vulnerability at any stage of your development process, such a tool should scan your application as often as possible, preferably on every commit. That makes it an ideal candidate for the continuous delivery pipeline.

Fortunately, it’s easy to make Brakeman part of your CI.

Install Brakeman by adding it to your Gemfile:

group :development do
  gem 'brakeman', :require => false
end

And add the following command to your build setup on Semaphore:

bundle exec brakeman -z

The -z flag tells Brakeman to fail if it finds any vulnerabilities. This means that your build will fail before any issues reach production, letting you know you should fix the problem.

We definitely recommend that you introduce security checks to your continuous delivery pipeline, and Brakeman is a good choice for Rails apps.

Capistrano 3 Upgrade Guide

We recently started receiving support requests about Capistrano 3. Of course, to provide quality support you have to know the subject, so I set out on a quest to upgrade Semaphore’s deployment script from Capistrano 2 to Capistrano 3. As always, it took a bit longer than expected, but in the end the new code looks nicer.

I have to say that I did have a flashback from a couple of years ago, when I was setting up Capistrano for the first time: the documentation is missing some things and is a bit scattered between the readme, the wiki and the official homepage. But it’s open source, so we are all welcome to contribute improvements.

I will try to make the upgrade easier for you by presenting our old vs new Capistrano configuration step by step.

Bootstrap new configuration


As the first step, you have to install the new gems. Capistrano 2 didn’t have support for multistage configurations, so you also had to use the capistrano-ext gem. Capistrano 3 has multistage setup included. It is framework agnostic, so you will have to use the capistrano-rails gem, which adds support for deploying Rails applications. Just update your Gemfile as in the example below, run bundle install and you are ready to start the upgrade process.

Capistrano 2:

group :development do
  gem "capistrano"
  gem "capistrano-ext"
end

Capistrano 3:

group :development do
  gem "capistrano-rails"
end

Capify project with Capistrano 3

As the official upgrade guide advises, it’s a good idea to move the old Capistrano files to a safe place and then manually move the configuration into the newly generated files. Here is one way to do that:

mkdir old_cap
mv Capfile old_cap
mv config/deploy.rb old_cap
mv config/deploy/ old_cap

After that you are ready to capify your project with new Capistrano:

bundle exec cap install


Among other newly generated files you should also have the new Capfile. Below is how our Capfile used to look, followed by the new one.

Capistrano 2:

load "deploy"
load "deploy/assets"
Dir["vendor/gems/*/recipes/*.rb","vendor/plugins/*/recipes/*.rb"].each { |plugin| load(plugin) }
load "config/deploy"

Capistrano 3:

require "capistrano/setup"
require "capistrano/deploy"

require "capistrano/bundler"
require "capistrano/rails/assets"
require "capistrano/rails/migrations"
require "whenever/capistrano"

Dir.glob("lib/capistrano/tasks/*.cap").each { |r| import r }

Your new Capfile will also contain two commented lines for rvm and rbenv support. We don’t use either of those tools for managing Ruby versions on our servers, so I can’t say much about that part.

require "capistrano/rvm"
require "capistrano/rbenv"

Multistage configuration

Configuration for stages really hasn’t changed much, as you can see below. However, there is one thing that you need to pay special attention to: the way of telling Capistrano to deploy a specific revision has changed. If you are using continuous deployment with Capistrano, you have probably seen this line:

bundle exec cap -S revision=$REVISION production deploy

REVISION is an environment variable that Semaphore exports during deployment, and Capistrano 2 used it as a parameter. In Capistrano 3 this is gone, and you have to take care of it by setting the branch variable to the revision or branch that you want to deploy. Our configuration already had the ability to specify a branch through an environment variable:

set :branch, ENV["BRANCH_NAME"] || "master"

so we just had to prepend ENV["REVISION"] to that chain.

set :branch, ENV["REVISION"] || ENV["BRANCH_NAME"] || "master"

This is one of the things that is not documented; you either have to dig it up in the source or ask somewhere. All in all, the change should be pretty straightforward.
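The fallback chain is just Ruby’s || operator at work, which you can verify in isolation:

```ruby
# No REVISION exported: fall back to the branch name
ENV.delete("REVISION")
ENV["BRANCH_NAME"] = "develop"
branch = ENV["REVISION"] || ENV["BRANCH_NAME"] || "master"
# branch is now "develop"

# With REVISION exported (as Semaphore does during deployment), it wins
ENV["REVISION"] = "abc123"
branch = ENV["REVISION"] || ENV["BRANCH_NAME"] || "master"
# branch is now "abc123"
```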

The file below is config/deploy/production.rb.

Capistrano 2:

server "", :app, :web, :db, :primary => true, :jobs => true
server "", :app, :web, :jobs => true

set :branch, ENV["BRANCH_NAME"] || "master"

Capistrano 3:

set :stage, :production

server "", user: "deploy_user", roles: %w{web app db}
server "", user: "deploy_user", roles: %w{web app}

set :branch, ENV["REVISION"] || ENV["BRANCH_NAME"] || "master"

Main configuration - config/deploy.rb

The biggest changes you will have to make are in this file. I will list the changes that you need to pay attention to.

  1. You no longer need to require capistrano/ext/multistage or bundler/capistrano. Multistage is included by default, and bundler support is included in the Capfile.
  2. No need to specify available stages or a default_stage.
  3. The variable name for setting the repository URL has changed from repository to repo_url.
  4. deploy_via :remote_cache is not needed any more. There have been large changes under the hood in how Capistrano handles repositories; it now creates a local mirror of the repository on your server.
  5. The PTY option is on by default.
  6. ssh_options seem to have changed slightly, but the basic settings are pretty much the same.
  7. Capistrano will now take care of all the symlinks that you need. Just tell it to go through linked_files and linked_dirs.
  8. In case you are not using rvm or rbenv, you will need to override the rake and rails commands. (See the Capistrano 3 deploy.rb file.)

Writing custom tasks has changed significantly, and you will have to dig deeper into the documentation to write the tasks that you need. The library responsible for this is SSHKit, and it seems like quite a nice library.

Pro-tip: in Capistrano 2 you could just write var_name and get the value. In the new version you always need to write fetch(:var_name). It took me some time to figure this out while re-writing a custom task that we use to manage our workers.
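A toy re-implementation of the set/fetch pair (for illustration only, not Capistrano’s actual code) shows the behavior:

```ruby
# Minimal sketch of Capistrano 3's set/fetch semantics:
# values are stored in a registry and read back explicitly with fetch.
SETTINGS = {}

def set(name, value)
  SETTINGS[name] = value
end

def fetch(name, default = nil)
  SETTINGS.fetch(name, default)
end

set :application, "webapp"

fetch(:application)         # => "webapp"
fetch(:missing, "fallback") # => "fallback" when the variable was never set
```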

Capistrano 2:

require "capistrano/ext/multistage" #1
require "bundler/capistrano"

set :application, "webapp"
set :stages, %w(production staging)
set :default_stage, "staging" #2

set :scm, :git
set :repository,  "" #3
set :deploy_to, "/home/deploy_user/webapp"
set :deploy_via, :remote_cache #4

default_run_options[:pty] = true #5
set :user, "deploy_user"
set :use_sudo, false

ssh_options[:forward_agent] = true #6
ssh_options[:port] = 3456

set :keep_releases, 20

namespace :deploy do

  desc "Restart application"
  task :restart, :roles => :app, :except => { :no_release => true } do
    run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"
  end

  desc "Prepare our symlinks" #7
  task :post_symlink, :roles => :app, :except => { :no_release => true } do
    ["config/database.yml", "config/config.yml"].each do |path|
      run "ln -fs #{shared_path}/#{path} #{release_path}/#{path}"
    end
  end
end

after  "deploy",                   "deploy:post_symlink"
after  "deploy:restart",           "deploy:cleanup"
before "deploy:assets:precompile", "deploy:post_symlink"

Capistrano 3:

set :application, "webapp"

set :scm, :git
set :repo_url,  ""
set :deploy_to, "/home/deploy_user/webapp"

set :ssh_options, {
  forward_agent: true,
  port: 3456
}

set :log_level, :info

set :linked_files, %w{config/database.yml config/config.yml}
set :linked_dirs, %w{bin log tmp vendor/bundle public/system}

SSHKit.config.command_map[:rake]  = "bundle exec rake" #8
SSHKit.config.command_map[:rails] = "bundle exec rails"

set :keep_releases, 20

namespace :deploy do

  desc "Restart application"
  task :restart do
    on roles(:app), in: :sequence, wait: 5 do
      execute :touch, release_path.join("tmp/restart.txt")
    end
  end

  after :finishing, "deploy:cleanup"

end


The code that you get in the end is cleaner, and Capistrano 3 together with SSHKit seems like a powerful combo. However, some libraries, like whenever and bugsnag, don’t have Capistrano 3 support yet, so for now you will have to take care of that part on your own.

Custom Configuration Files

If testing and deploying your project requires some specific configuration files, you can now manage them directly through our new Custom Files feature. This feature allows you to securely create, edit or delete files that are not part of your repository.

For your new custom file you need to specify a target file path. For instance, if you have a project “semaphore_builder” and you want to add a new custom file to your project’s directory, your file path should look like the one in the screenshot below:

For greater security, the content of your custom files can be encrypted. We strongly recommend selecting this option if you are adding sensitive content such as SSH keys. Once encrypted, a file can no longer be edited. The identity of your file can be determined by an MD5 hash.

Here are some ideas for how you could use this feature:

  • Saving one (or more) additional SSH keys used to check out a private project dependency.
  • Creating a database.yml with non-standard attributes.
  • Storing a custom /etc/hosts file for subdomain configuration.

How to use different Gemfiles with Bundler

Normally, when you’re working on a Ruby project with Bundler, you write a file called Gemfile, run bundle install and then invoke various commands with the bundle exec prefix. But what if you would like to work with different versions of gems over a longer period of time? In that case, being able to use multiple Gemfiles within a single branch can help.

Bundler supports passing an environment variable called BUNDLE_GEMFILE to all of its commands. Here is an example of how we can tell it to use a file called Gemfile-rails4:

BUNDLE_GEMFILE=Gemfile-rails4 bundle install --path vendor/bundle-rails4

You can then run tests in a similar way:

BUNDLE_GEMFILE=Gemfile-rails4 bundle exec rake spec
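For reference, a Gemfile-rails4 is just an ordinary Gemfile under a different name. A hypothetical minimal example (the gems and versions are illustrative):

```ruby
# Gemfile-rails4 -- an alternate Gemfile pinning a different Rails version.
# Gem names and version constraints here are examples, not a recommendation.
source "https://rubygems.org"

gem "rails", "~> 4.0.0"
gem "rspec-rails", :group => [:development, :test]
```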

On Semaphore, I recommend creating a new project with the same repo and using build commands tailored for the custom Gemfile.
