Semaphore Blog

News and updates from your friendly continuous integration and deployment service.

Build queue issues

On Thursday, October 23rd, after the rollout of a planned platform update at 13:30 UTC, Semaphore experienced issues which caused delays in running builds and deploys, coupled with decreased performance. First, we want to apologize for that. We know it messed up your workdays. That’s not how we want to do things, and we will do better next time. Second, I’d like to take a moment to explain what happened.

Build errors after platform update

More than a few projects experienced unexpected build failures related to the mysql2 gem. This was caused by the fact that this platform update migrated all projects to a new OS version, Ubuntu 14.04, which ships with different system libraries. Since Semaphore caches your project’s git repository and installed dependencies between builds, there were cases where dependencies such as Ruby gems that link against system libraries stopped working.

While we did our best to help everyone as soon as they raised the issue — on support, live chat or Twitter — this was not the way we intended things to go. Our goal is that you never have to be aware of such details or run into unexpected failures that require action on our end.

For this reason, we immediately shipped a small tool for clearing a project’s dependency cache, now available in project settings. At 17:02 UTC we cleared the cache for all projects. This resulted in fresh git clones and dependency installations for everyone, but without unexpected failures. We will be evaluating how to do this more granularly before the next major update of this kind.

Slow build queue

At the same time, our infrastructure was experiencing a larger issue where machines were going down at a very high rate. While we have a system that automatically detects this situation and reschedules any jobs running on an affected machine, it couldn’t solve the problem completely because the failures were happening too fast, adding more and more jobs to our build queue. We considered increasing our capacity, but realized that it would not remedy the problem.

At 18:50 UTC we identified a memory leak in Java-based services, such as Solr and Cassandra, which was causing build servers to fail. After some consideration we settled on our first guess, that it was caused by the platform update’s switch to Oracle JVM as the default JVM, and by 21:46 UTC we shipped a revert back to OpenJDK globally.

It took some more time for it to become evident that this was not the solution. Eventually we realized that the only remaining change was a memory limiter on the LXC containers that run the builds, which caused unexpected behaviour when certain Java processes hit it. We reverted this implementation of memory limiting at 22:10 UTC, and all builds were able to start and finish normally. At 22:59 UTC the build queue was clear (as we announced on Twitter) and new builds were starting at normal speed.

What we’ll do to avoid this in the future

While we extensively test every platform update before the final release, it is not possible to recreate every scenario that comes from real-world usage of the service. For this reason, our current plan is to extend our infrastructure to securely run copies of a fraction of jobs on a new version of the platform.

Once again, we would like to apologize for the interruption in your work. We know how important Semaphore’s CI service is to you, and while this is our first situation worthy of a post-mortem in more than two years, we see failures as inevitable. It is our imperative, however, to shield you from them as much as we can.

BDD on Rails with Minitest, Part 1: Up and Running

BDD on Rails with Minitest tutorial

Chris Kottom, guest author on Semaphore blog

This is a guest blog post from Chris Kottom, a longtime Ruby and Rails developer, freelancer, proud dad, and maker of the best chili con carne to be found in Prague. He’s currently working day and night on the upcoming e-book The Minitest Cookbook.

Behavior-driven development (BDD) has gained mindshare within the Ruby and Rails communities in no small part because of a full-featured set of tools that enables development guided by tests at many different levels of the application. The early leaders in the field have been RSpec and Cucumber, which have each attracted millions of users and dozens if not hundreds of contributors. But a growing number of Rubyists have started to build and use testing stacks based on Minitest, which provides a comparable unit testing feature set in a simplified, slimmed-down package.

In this two-part series, I’ll show you how to implement a BDD workflow based on Minitest and hopefully introduce you to an alternative means for getting the same behavior-driven goodness into your application in the process. In this post, you’ll see how to set up your testing stack and run through a quick iteration to verify that everything is configured and working properly.

Setting Up

As an example, we’ll work on a simple Rails application that lists to-do items, each initially consisting of a name and a description, and build it BDD-style using Minitest and other supporting tools. Before writing the first test, we need to install and configure our testing stack. In this case, we’re going with a combination of gems that will provide an experience familiar to regular RSpec users.

To install the required gems, we need to add the following lines to the Gemfile.

group :test do
  gem 'minitest-rails-capybara'
  gem 'minitest-reporters'
end

Minitest supports two different methods for writing tests: a default assert-style syntax which resembles the classic Test::Unit, and a spec-style syntax that more closely resembles RSpec. I personally prefer the spec-style syntax, so for this example we’ll be writing all the tests as specs. To make the generators produce spec-style test cases, we need to add the following block to config/application.rb:

# Use Minitest for generating new tests.
config.generators do |g|
  g.test_framework :minitest, spec: true
end

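To see the difference between the two styles outside of Rails, here is a minimal, self-contained sketch using plain Minitest. The class and test names are purely illustrative; also note that recent Minitest versions wrap the subject in _( ) for spec-style expectations, whereas at the time of writing you could call must_be_empty on the object directly.

```ruby
require "minitest/autorun"

# Assert-style: the classic Test::Unit-like syntax.
class TodoListTest < Minitest::Test
  def test_new_list_is_empty
    assert_empty []
  end
end

# Spec-style: RSpec-like describe/it blocks, which Minitest
# compiles into test classes and methods under the hood.
TodoListSpec = describe "a to-do list" do
  it "is empty when new" do
    _([]).must_be_empty
  end
end
```

Running this file executes both tests; they are the same check expressed in the two syntaxes.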
Next, we’ll update our test_helper.rb by running the minitest-rails generator: rails generate minitest:install. The new version of the test helper requires minitest-rails as a basis for all tests. We’ll modify it a bit further so that the final version also requires minitest-rails-capybara and minitest-reporters and configures the reporters. The finished product should look something like this:

ENV["RAILS_ENV"] = "test"
require File.expand_path("../../config/environment", __FILE__)
require "rails/test_help"
require "minitest/rails"
require "minitest/rails/capybara"
require "minitest/reporters"

Minitest::Reporters.use! Minitest::Reporters::SpecReporter.new

class ActiveSupport::TestCase
  fixtures :all
end

A Failing Feature

The most basic feature of our to-do list will be a page that displays a list containing our to-do items, so we’ll start by generating a new Capybara feature using the provided generator: rails generate minitest:feature ToDoList. That initializes a boilerplate test in test/features which we can then update to suit our needs. We’ll begin by writing the most basic possible test for this application, with minitest-rails-capybara providing a nice bridge between two worlds by packaging the Capybara DSL as Minitest-style assertions and expectations.

require "test_helper"

feature "To Do List" do
  scenario "displays a list of to-do items" do
    visit root_path
    page.must_have_css(".items")
  end
end

When I run my test suite, it bombs. Predictably.

To Do List Feature Test
  test_0001_displays a list of to-do items                       ERROR (0.00s)
NameError:         NameError: undefined local variable or method `root_path' for #<#<Class:0x007f83c41efee8>:0x007f83c4153930>
        test/features/to_do_list_test.rb:5:in `block (2 levels) in <top (required)>'
        test/features/to_do_list_test.rb:5:in `block (2 levels) in <top (required)>'

Finished in 0.00508s
1 tests, 0 assertions, 0 failures, 1 errors, 0 skips

This is exactly what we hope to see, of course. Our workflow dictates that we first establish the desired behavior, see that it fails, and then write the code to make it pass. We’re now ready to begin stepping our way through the implementation using the tests as a feedback mechanism.

The Red-Green Dance

The initial error occurs because the application has no routes defined yet, so we’ll need to add a root path pointing to a hypothetical ItemsController to config/routes.rb:

Rails.application.routes.draw do
  root to: 'items#index'
end

When we re-run the tests, the error that occurs has changed.

To Do List Feature Test
  test_0001_displays a list of to-do items                       ERROR (0.00s)
ActionController::RoutingError:         ActionController::RoutingError: uninitialized constant ItemsController
        test/features/to_do_list_test.rb:5:in `block (2 levels) in <top (required)>'
        test/features/to_do_list_test.rb:5:in `block (2 levels) in <top (required)>'

Finished in 0.00736s
1 tests, 0 assertions, 0 failures, 1 errors, 0 skips

The route we just defined expects an ItemsController with an index action. It would be easy enough to just create or generate one, but our workflow dictates that we first need a failing test describing what that controller action should do. At the moment, we’re interested in the shortest path to displaying a view with an empty list of items. That produces a controller test like this:

require "test_helper"

describe "ItemsController" do
  describe "GET :index" do
    before do
      get :index
    end

    it "renders items/index" do
      must_render_template "items/index"
    end

    it "responds with success" do
      must_respond_with :success
    end
  end
end

Most of the heavy lifting for the simple checks you see here is done by Rails’ ActionController::TestCase, with the spec-style expectations provided as syntactic sugar by the minitest-rails gem. Minitest’s spec syntax resembles that of earlier versions of RSpec, before recent changes to the way specs are defined and the introduction of the expect syntax. The resulting code is terse but expressive, and it reads well with no unnecessary noise.

Now when we run tests, we have errors in the feature and the two new controller tests.

To Do List Feature Test
  test_0001_displays a list of to-do items                       ERROR (0.00s)
ActionController::RoutingError:         ActionController::RoutingError: uninitialized constant ItemsController
        test/features/to_do_list_test.rb:5:in `block (2 levels) in <top (required)>'
        test/features/to_do_list_test.rb:5:in `block (2 levels) in <top (required)>'

ItemsController::GET :index
  test_0001_renders items/index                                  ERROR (0.00s)
NameError:         NameError: Unable to resolve controller for ItemsController::GET :index

  test_0002_responds with success                                ERROR (0.00s)
NameError:         NameError: Unable to resolve controller for ItemsController::GET :index

Finished in 0.01033s
3 tests, 0 assertions, 0 failures, 3 errors, 0 skips

Awesome, it’s practically raining errors! Since all of these are caused by the lack of an ItemsController, we can fix them by creating one. We’ll use the Rails generator, making sure not to overwrite the controller test we’ve already started working on when prompted. Once that’s done, re-running the tests yields the following output.

To Do List Feature Test
  test_0001_displays a list of to-do items                       FAIL (0.02s)
Minitest::Assertion:         expected to find #items.
        test/features/to_do_list_test.rb:6:in `block (2 levels) in <top (required)>'

ItemsController::GET :index
  test_0001_renders items/index                                  PASS (0.00s)
  test_0002_responds with success                                PASS (0.00s)

Finished in 0.03471s
3 tests, 3 assertions, 1 failures, 0 errors, 0 skips

The generated controller clears up both controller test errors for the time being, and because there is now a controller action and a basic view, the previous error has turned into a failure. All that needs to happen now is to add an empty list of items to the view by replacing the boilerplate view code with:

<%= content_tag :div, class: 'items' do %>
<% end -%>

And when we run the tests again:

To Do List Feature Test
  test_0001_displays a list of to-do items                       PASS (0.02s)

ItemsController::GET :index
  test_0001_renders items/index                                  PASS (0.00s)
  test_0002_responds with success                                PASS (0.00s)

Finished in 0.03379s
3 tests, 3 assertions, 0 failures, 0 errors, 0 skips

All green! In only a few minutes, we’ve created a new application and driven our way to the first feature from start to finish using tests as a guide.

This gives you a taste of how to get started building a new Rails application using BDD and Minitest. In part two of the series, we’ll dig in deeper and show how to add more realistic functionality to the application while driving the whole process through our tests.

For more information about Minitest and the related gems used in this post, check out some of the following resources:

This tutorial continues in “BDD on Rails with Minitest, Part 2: Implementing a Feature”.

Upcoming Platform Update on October 23rd

The upcoming platform update is scheduled for Thursday, October 23, 2014.

JRuby is updated to version 1.7.16.

Three PHP version updates are included too, namely 5.4.33, 5.5.17 and 5.6.1.

Git is updated to version 2.1.1.

Firefox is running version 33. Every project using the selenium-webdriver gem will need to update it with bundle update selenium-webdriver.

New things

We’ve also prepared quite a few additions to the platform based on your requests.

The list of additions is as follows:

You can try out this update right away by selecting version 1410 (release candidate) in the Platform section of Project Settings. As always, we’re eager to hear your feedback and thoughts.

A full list of changes is available in the platform changelog.

Continuous Integration for Bitbucket Projects on Semaphore

Semaphore launches continuous integration for Bitbucket projects

With great pleasure we are announcing that Semaphore is extending its hosted continuous integration and deployment service with support for Bitbucket repositories. Bitbucket is a source code hosting platform for the distributed version control systems (DVCS) Git and Mercurial, made by Atlassian.

As of today, Semaphore has everything you need to automatically test any project from a Git repository on Bitbucket. This includes setting up a full continuous delivery pipeline to any server or cloud provider, including Heroku and AWS.

The service is free for open source projects and up to 100 private builds per month.

Create a free account and start testing your projects today.

Adding your Bitbucket projects to Semaphore

Setting up continuous integration for a Bitbucket project and running your first build on Semaphore is easy and takes only a few clicks.

After you click to add a new project, select Bitbucket as your repository host. Next, you will be redirected to Bitbucket to authorize Semaphore to access your projects. Keep in mind that Bitbucket does not support granting access to public repositories only, so the authorization covers both your private and public repositories. Once you confirm access, Semaphore will present you with a list of your Bitbucket repositories.

In the next steps, you will select a Bitbucket repository, the branch to build, and the Semaphore account that you want the repository to be attached to. Note that in order to add a repository from Bitbucket, a user has to either be its creator or belong to a Bitbucket group that has admin rights to that repository.

After that, Semaphore will perform a quick analysis of the source code and generate a set of working build commands based on your project’s programming language.

At this point you’ll be ready to launch your first build. Here’s what one of our latest looks like:

If you are using Bitbucket, we hope that you are as excited about this feature as we are. Happy building!

Deploy from your chat room

Remember how excited you were when Jesse Newland from GitHub presented ChatOps at GitHub? Great news! From now on you can set up your own Hubot to perform deployments on Semaphore.

Several weeks ago, Ben Straub from Gridium asked us about a way to trigger a deploy via the Semaphore API. We exchanged several emails, and as a result he sent us a Hubot script that runs a deploy on Semaphore. In the screenshot below you can see it in action, as one of the founders of Gridium deploys a project to production right from a Slack chat room.

Deploying to production with Semaphore from a Slack chat room

How to set up Hubot

Hubot — GitHub’s chat bot — is a friendly robot that lives in your chat room and helps you and your team with simple tasks.

It can be deployed on a wide variety of platforms, including Linux and Windows, but a popular option is to deploy it to Heroku. Detailed instructions about installation can be found in Hubot’s Readme file.

After deployment, you need to invite the bot into your chat room. Hubot integrates easily with popular chat services such as Slack and Campfire, but it can also work with many others.

Make your Hubot work with Semaphore

In order to extend your own Hubot with the deployment feature, please clone the hubot-semaphore-deploy repository and follow the tutorial in Hubot’s Readme on adding scripts.

After installing the script, you will need to copy the API auth token from Semaphore and export it in your Hubot’s production environment. The auth token can be found in your project’s settings under the API tab.

export HUBOT_SEMAPHOREAPP_AUTH_TOKEN=<your-api-auth-token>

And that’s all it takes to have deploy power right in your chat room. Having this feature in your company’s workflow can increase awareness of what everyone is working on, and also reduce the time newcomers need to get up to speed with the process of continuous integration and delivery.

Update: Ben Straub joined forces with the author of hubot-semaphoreapp and combined this project with Semaphore status updates.

A Look Inside Development at 500px

In the “Developer Interview” series we talk to developers from some of the companies using Semaphore to find out how they work and share their insights with you. This time we chatted with Devon Noel de Tilly, QA Engineer at 500px, a photo community for discovering, sharing, buying and selling inspiring photography powered by creative people worldwide.

Tell us about your role and responsibilities at 500px.

Devon Noel de Tilly, QA engineer at 500px

I’m a QA Engineer at 500px. I look after our CI setup which runs on Semaphore, maintain our suite of automated tests, run manual tests in a pre-production environment, manage deployment, find and fix bugs, and work closely with our developers and sysadmins. My role is something of a hybrid between testing, automation / DevOps and sometimes development. I work primarily on our web apps and supporting services.

Your products serve an established community of users. What does your workflow typically look like at this point, from an idea until it’s shipped?

A lot of what we do at 500px comes out of ideas from our developers and designers. We hold monthly hack days, and a lot of our new features have come from that, as well as just discussions amongst team members. Of course we get ideas from our users, what they want and what is important to them. And sometimes our executives will come to the team with what they’d like to see.

We hold monthly hack days, and a lot of our new features have come from that.

Usually after someone gets an idea, they’ll make a small proof of concept, often for hack day but sometimes they’ll just pitch it to anyone who’ll be involved in making it a reality. If people like the idea and are on-board for it, then the developers or designers will start working on it. Our technical staff is separated into teams at 500px, so we have a web team, a platform team, an iOS team, etc. that are all largely self-contained, but collaborate closely when necessary. Usually a project will be scoped to one team, with maybe a bit of support from some of the other teams, so planning is generally pretty minimal.

After a team has decided they’d like to do something, and our project managers have split the work up into tickets and slotted it in for a sprint, it’s largely up to the individual teams how they want to proceed. For example, our platform team likes to do things in small bite-sized tickets and work either in pairs or alone, hammering out a new feature a little bit at a time, while our web team prefers to get together in a big room with their laptops and all work on a new feature together.

500px team

Once the initial design and development is done for a ticket, the developer will open a pull request, and it’ll go through code review, where other designers and developers on the team will point out flaws and make suggestions for improvement. After code review, it’ll enter QA, where we’ll run the full suite of automated tests through Semaphore against the branch, as well as do some manual and load tests in a pre-production environment. If there are failed specs, or if we find bugs through manual testing, QA will gather any relevant logs, errors and steps to reproduce, sometimes make suggestions, reject the ticket, and developers will work to fix it. After a ticket has been accepted by QA, we’ll merge it into the master branch and deploy the code to users.

Did you have times when technical debt slowed down this process? If so, how did or do you overcome it?

Yeah, I think technical debt is a problem for any code base once it’s become sufficiently large. We try to be proactive about it as much as possible, but there have been times when it’s come back to bite us a bit, or when it’s slowed down our development cycle for sure.

It’s not so much a problem we’ve overcome as a problem we’re always overcoming. It’s something we have to be constantly mindful of. How we deal with it is again largely on a team by team, or even individual by individual basis. We use Code Climate to analyze our code base, and that helps us find some things we’ve overlooked. Sometimes members of the team who find themselves with time will spend it combing over the code base and addressing some of our technical debt.

For me, when I started at 500px, a big part of the technical debt we had at the time was the state of our spec suite, which wasn’t really being maintained and was a bit of a mess.

500px team

What was your main guideline in getting to a green build?

I wouldn’t say I really had one main guideline, unless you count the vastness of the internet as a whole. I drew from a variety of resources. But I approached the problem knowing where our weaknesses were (mostly race conditions and inefficiently loading test resources), and keeping best practices in mind, found our pain points and corrected them a little at a time.

I used Relishapp’s RSpec documentation and Betterspecs as guidelines for best practices, as well as a variety of great tech blogs, like Airbnb’s blog and Semaphore’s own blog.

There’s an idea that if you care about something, and you want to work on it, then work on it and it’s yours.

Given the amount of tests in our suite and the amount of race conditions that needed addressing, I also realized early on that I’d have to make a couple compromises. I found a great little gem called respec that we use to rerun failed specs exactly once at the end of our test build, and that helped us a lot in the beginning.

Do you release new features to all users immediately?

Often, but not always. Small new features and bug fixes we generally push to all users right away, but some of our bigger features we’ll protect behind rollout flags and roll out internally first, so we can all get acquainted with them and find any problems or things that we might’ve overlooked. If something has the potential to really hammer our API, and we’re not sure how it’ll perform, we’ll sometimes do a staged rollout to percentages of our users. And we also do A/B tests when we’re unsure how users will respond to a new feature or layout.

500px team

How do you manage the code for your main application? Is it pretty much a classic MVC web app, or have you branched out into something more custom?

At one point our main site was basically one huge Rails app (which we call The Monolith internally), but as our platform got bigger it became harder and harder to maintain. So we’ve started splitting everything up into a microservices architecture. The idea is that any logically distinct part of code should be self-contained.

Ruby, and also Rails, are designed primarily to be easy to use and pleasant for the programmer, that’s a big philosophy in the Ruby and Rails community. That things should Just Work. Which is awesome, but once you start getting into big computation on the backend, it can be pretty slow and inefficient. So we started splitting out some of our really slow code, our pain points, into their own Go services, which is much more performant. Things like search and photo conversion.

What’s your favorite part in working at 500px?

There are a lot of things I like about working at 500px. The opportunities to work on things that interest me, and to grow my skills and knowledge. The great benefits and team outings. The people I work with, they really make it a great place to work. But if I had to pick something, it would probably be the attitude towards ownership of the product. The autonomy I’m given is great, I don’t feel like I have people looking over my shoulder and telling me what to do all the time.

At 500px, there’s an idea that if you care about something, and you want to work on it, then work on it and it’s yours. That we own the things we’re working on, and that we’re ultimately responsible for the things we decide are important. Of course there are disagreements sometimes about what we should be working on, and of course there’s pressure from other team members to focus on something, but that’s the ideal we’re working towards. And I feel like I can really get behind that message.

Testing Clojure With Expectations

Clojure is shipped with clojure.test — a library with basic features for writing tests. However, there are a few alternatives that aim to make writing tests more pleasant or more suitable for BDD. Expectations by Jay Fields is one of them, described as “a minimalist’s unit testing framework” with the slogan “adding signal, removing noise”.

Expectations setup

The easiest way to get started is to add Expectations and lein-expectations to your project.clj:

(defproject expectations-playground "0.1.0-SNAPSHOT"
  :description "Playground for exploring Expectations"
  :license {:name "Eclipse Public License"
            :url ""}
  :dependencies [[org.clojure/clojure "1.6.0"]
                 [expectations "2.0.9"]]
  :plugins [[lein-expectations "0.0.8"]])

After that, you can run tests with lein expectations.

Expectations also integrates well with several editors and development environments, such as Emacs and IntelliJ. More information about installation and setup can be found in the library documentation.

Adding signal, removing noise

A simple example comparing clojure.test and Expectations already reveals a few interesting details:

; clojure.test
(deftest equality-test
  (testing "Is 'foo' equal 'fooer'"
    (is (= "foo" "fooer"))))

; expectations
(expect "foo" "fooer")

Running the test with clojure.test gives:

While running the test with Expectations gives:

We already see Expectations delivering on the slogan “adding signal, removing noise”. The test is shorter, but it gives us more information in a nicer way. Output is colored by default, the format is a bit easier on the eyes, and it’s more precise about the failure: it says exactly how the strings differ. It also shows how much time the test took to run, which can be useful when optimizing tests.

Minimal and consistent syntax

It’s interesting to see how minimal and consistent the Expectations syntax is. At first glance, there is not much besides expect. It can get you a long way:

(expect 2 (+ 1 1))

(expect "foo" (str "f" "o" "o"))

(expect [1 2 3] (conj [1 2] 3))

(expect {:a 1 :b 2} (assoc {:a 1} :b 2))

(expect #"Expect" "Expectations")

(expect empty? [])

(expect 3 (in [1 2 3]))

Even more signal

Expectations really shines when it comes to testing collections as it tests not only equality, but also contents of collections:
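The canonical illustration of this in the Expectations documentation is the in form, which expects a key/value pair to be contained in a map, or an element to be contained in a set or sequence; a sketch along those lines:

```clojure
;; a key/value pair contained in a larger map
(expect {:foo 1} (in {:foo 1 :cat 4}))

;; an element contained in a set or a vector
(expect :foo (in #{:foo :bar}))
(expect :foo (in [:bar :foo]))
```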

Even less noise

Expectations tries aggressively to remove noise from test output. For example, it trims long stack traces, leaving only the important part.

Example stack trace using clojure.test:

This is the equivalent stack trace using Expectations:


Expectations certainly delivers on its promise of “adding signal, removing noise”. The library also provides a few additional tricks that are not covered here, such as testing side effects and freezing time. For more information, visit the Expectations website.

Introducing Upcoming Platform Update, Selectable Platform Versions and Ubuntu 14.04 Support

The upcoming platform update is scheduled for Wednesday, September 24th, 2014 and is our first release to be based on Ubuntu 14.04 LTS.

With this update we’re introducing release candidate platforms which can be selected in project settings. From now on when we announce an update, you can immediately try it out.

Choosing between platforms in project settings.

We hope that this will ease the transition between updates. Keep in mind that these platforms are in a pre-release state, and some things may not work as expected until the final release day. The current platform will still be available after the update, so you can always fall back to it if needed.

We encourage you to send us your feedback and suggestions so that the final platform releases are in sparkling condition and ready for your projects.

Platform version naming and update schedule

This month we’re beginning a more structured process of platform update management. Each version will be named according to a simple YYMM convention; for example, this month’s version is 1409. We will release an update every month. If there are incremental or bugfix releases in the ongoing month, that version will get a minor increment, e.g. 1409.1.
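In other words, a version name is just the release date’s two-digit year followed by its two-digit month. A quick Ruby sketch (the helper name is made up for illustration):

```ruby
# Derive a platform version name from a release date,
# following the YYMM convention described above.
def platform_version(date)
  date.strftime("%y%m") # two-digit year, then two-digit month
end

puts platform_version(Time.new(2014, 9))  # prints "1409"
puts platform_version(Time.new(2014, 10)) # prints "1410"
```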

Each month’s release is considered a major version. When a new major version is released, we will always keep the previous version available in project settings until the next major version release.

Projects using the latest major version will always be automatically upgraded to the new major version at the time of its final release. Projects using an older major version will be upgraded to the latest one the moment their old platform version is dropped.

Ubuntu 12.04 end of life

To help all our users avoid any problems, this month we are releasing platform updates based on both Ubuntu 12.04 and 14.04. There will be no further platform updates based on Ubuntu 12.04; Semaphore platform version 1409 will be the last one. It will remain available until January 1st, 2015.

This month’s updates (v1409)

Firefox goes up to version 32, requiring bundle update selenium-webdriver for most projects that use that gem.

ElasticSearch is updated to version 1.3.2.

Ruby gets two updates with 1.9.2p330 and 1.9.3p547.

JRuby is also updated and now it’s running on version 1.7.15.

Bundler is now on version 1.7.2.

PHP updates include versions 5.3.29, 5.4.32, 5.5.16 and the addition of 5.6.0. For more information check out the changelog.

Git has been updated to version 2.1.

NodeJS receives two updates with 0.8.28 and 0.10.31.

Updates for the final release

Ruby 2.1.3 has been added to the supported languages. Check out the Ruby changelog.

Bundler is now on version 1.7.3.

A full list of changes is available in the platform changelog.

Semaphore stickers are here

At last they are here!

Semaphore continuous integration stickers

We have been hard at work and created Semaphore stickers in various sizes, so they fit any occasion. After consulting with stylists on the hottest trends for spring 2015, we present our inspirational concept below.

To obtain stickers visit this page. We ship all around the world!

Continuous integration stickers for all occasions

Working With A Safety Net: A Developer Interview With Procore

In the “Developer Interview” series we talk to developers from some of the companies using Semaphore to find out how they work and share their insights with you. This time we talked with Michael Siegfried, Senior Engineer at Procore, one of the fastest-growing companies in the US.

Tell us a little bit about your company and what it does.

Procore is a cloud-based construction software company. We aim to help construction companies work more quickly and efficiently. There are a lot of one-off tools for managing blueprints or contracts out there, but we give companies the ability to access all of their construction information in one place. A lot of our customers were keeping track of huge Excel files, so having all of their information available on any computer is huge for them!

“For any non-trivial feature we typically go through a research process that includes getting on calls with current customers”

What does your workflow typically look like, from an idea until it’s shipped?

For any non-trivial feature we typically go through a research process that includes getting on calls with current customers to suss out how it should behave. We have at least one member from the product, development, and quality assurance teams on these calls. That ensures that requirements don’t get passed down through hearsay and that we arrive at the best solution for all three departments.

If it’s a large project, we continue to work very closely with customers through a beta release to iteratively get closer and closer to the ideal feature. As development works on distinct parts of the feature, these branches will go through a quality assurance iteration before they are code reviewed and released.

What tools and guidelines do you use to write tests?

The two main resources for test-writing that we use are Thoughtbot’s guide to testing Rails applications and Better Specs. We write our specs in RSpec.

How do you approach refactoring, as the system grows?

One of the last refactorings I worked on was a large method called all_contract_buttons, which was, as the name implies, a method for gathering all of the buttons related to the contract. The method was 72 lines long, with a lot of duplication and irrelevant logic.

“Sometimes the act of refactoring costs more than the benefits of clean code.”

The way I approached the situation was to begin writing unit tests for the current implementation. Once I had this safety net, I began splitting up the logic and creating helper methods. The unit test became more of an integration test, but it enabled me to take this 72-line method and reduce it to 20 lines. The code was much easier to read and reason about. This approach gives you confidence that you are not breaking current functionality as you refactor.
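That safety-net approach can be sketched in Ruby. The ContractButtons class and its rules below are hypothetical stand-ins, not Procore's actual code; the point is that a characterization test written against the long method keeps passing after the logic is split into named helpers:

```ruby
# Hypothetical stand-in for a long button-gathering method, shown
# after refactoring: the original conditional soup is split into
# small helpers that each answer one question.
class ContractButtons
  def initialize(contract)
    @contract = contract
  end

  def all
    buttons = []
    buttons << :edit   if editable?
    buttons << :submit if submittable?
    buttons << :void   if voidable?
    buttons
  end

  private

  def editable?
    @contract[:status] == :draft
  end

  def submittable?
    @contract[:status] == :draft && @contract[:complete]
  end

  def voidable?
    @contract[:status] == :approved
  end
end

# The characterization test written before the refactoring still passes:
draft = ContractButtons.new(status: :draft, complete: true)
raise "regression!" unless draft.all == [:edit, :submit]
```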

What’s the last big realization about programming that you had?

One of the last big realizations about programming I had is that refactoring doesn’t necessarily justify its own cost. Sometimes the act of refactoring costs more than the benefits of clean code. It certainly makes sense to refactor something if you are touching that part of the code base, but sometimes there are bigger fish to fry.

How does Semaphore help you achieve your goals?

Semaphore is integral to our development process. We never deploy code to production unless the build is green in Semaphore. We also don’t merge branches into our master branch unless they are green.

A handful of times Semaphore has helped us find bugs in code that we were about to ship, but more often it helps us find bugs before we even consider the code ready to ship. Semaphore finds potential bugs before they become user-facing issues and gives us peace of mind that we are not breaking any tested code. While we still have a ways to go toward better spec coverage, improving it will only make Semaphore an even more important part of our process!
