Channable

Semaphore halves time from push to deployment for Channable’s heavy-duty builds

  • ⛔️ 30 minute Haskell builds
  • ⛔️ Underpowered build agents
  • ⛔️ Lost parallelisation speedup
  • ⛔️ Productivity lost to context switching
  • ⛔️ Waiting for git clones
  • ✅ 10 minute Haskell builds ⬇️ 3x
  • ✅ Powerful build agents
  • ✅ More parallel compilation
  • ✅ Reduced workflow disruption
  • ✅ Local git cache

Discover what Semaphore can do for your team 🧑‍💻

The Challenge

The Channable team’s workflow only permits merges to the master branch once the CI system has checked the merge commit. If another merge to master happens in the meantime, the results of any in-flight merge commit builds are invalidated. “Long wait times for CI are frustrating,” explains Robert Kreuzer, co-founder and CTO. “If it takes too long, you go and do something else in the meantime, and when you remember to check your build, somebody else has merged to master, so you need to start over.” In addition, long build times lead to more context switching, so for the sake of productivity the team was keen to speed up their CI/CD pipeline.

Channable was experiencing two specific issues with their existing CI solution, Travis. First, wait times for jobs to start were long, and there was no visibility into what was causing the delay. Second, Channable’s pipeline includes compiling Haskell code to native binaries, which is CPU-intensive. The team found that builds on Travis machines were a lot slower than on their own workstations.

The excessive waiting and pipeline execution times were brought sharply into focus by production issues. Getting a hotfix from push to deployment could easily take 20 minutes, which, as Robert highlights, “is a very long time for a critical bug to be in production.”

“For CPU-heavy pipelines, it’s definitely the right tool for the job.”

Overall, Channable needed their new CI/CD solution to:

  • ✔️ Decrease wait times for queued builds
  • ✔️ Provide faster build machines for CPU-intensive compilation
  • ✔️ Offer fast, local caching for bandwidth-intensive git operations

“The Semaphore cache was even better than expected, very fast and easy to use”

The Solution

The Channable team surveyed a range of solutions, including AppVeyor, Azure DevOps, Bitbucket Pipelines, CircleCI, Cirrus, CodeShip, Concourse, GitLab CI, and SourceHut builds. The availability of multi-core build machines and the flexible, easy-to-use caching stood out as differentiators in Semaphore’s favour.

“The biggest impact for us was the availability of bigger multi-core VMs,” says Robert. “Especially the Haskell compilation and our test suite could benefit a lot from the added parallelism.” Semaphore offers three flavours of Linux machine, and each job in a pipeline runs in a separate build agent. For Channable’s CPU-intensive Haskell compilation step, they chose the most powerful, 8-core variant to maximise speed.
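Semaphore lets a pipeline mix machine sizes by overriding the agent per block, so only the expensive compile step needs the big VM. The sketch below is illustrative only: the block layout, machine type names, and Stack-based build commands are assumptions, not Channable’s actual configuration.

```yaml
# .semaphore/semaphore.yml — a minimal sketch, not Channable's real pipeline.
version: v1.0
name: Build and test
agent:
  machine:
    type: e1-standard-2        # smaller default agent for light jobs
    os_image: ubuntu1804
blocks:
  - name: Compile Haskell binaries
    task:
      agent:
        machine:
          type: e1-standard-8  # 8-core agent just for the CPU-heavy compile
          os_image: ubuntu1804
      jobs:
        - name: stack build
          commands:
            - checkout                 # clone through Semaphore's local git cache
            - stack build --jobs 8     # let the build use all 8 cores
```

Keeping the default agent small and reserving the 8-core machine for one block means the rest of the pipeline does not pay for capacity it cannot use.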

Further improvements over the old solution came via caching. The first type of caching that boosted performance is Semaphore’s local git cache, which considerably reduced the time spent cloning repositories. The second is Semaphore’s per-project cache, available in every pipeline job out-of-the-box, which sped up Channable’s pipeline and reduced download traffic. The Channable workflow stores infrequently changing dependencies after the first successful build of each day, and because caching works in every job without extra setup, configuring the build this way was easy. “The Semaphore cache was even better than expected,” concludes Robert, “very fast and easy to use.”
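In Semaphore, both caches are driven by built-in commands: `checkout` clones through the local git cache, while the `cache` CLI stores and restores directories under user-chosen keys. A hedged sketch of a once-a-day dependency cache along these lines, assuming a Stack-based Haskell build (the key names and cached path are illustrative, not Channable’s setup):

```shell
checkout                                    # clone through Semaphore's local git cache

# Try today's key first; otherwise fall back to the newest partial match.
cache restore stack-deps-$(date +%F),stack-deps-

stack build --only-dependencies             # rebuilds dependencies only on a cache miss

# Storing under an already-existing key is a no-op,
# so this effectively uploads fresh dependencies once per day.
cache store stack-deps-$(date +%F) ~/.stack
```

Dating the cache key gives the once-a-day refresh described above without any extra scheduling logic in the pipeline.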

Channable Pipeline

The Results

Semaphore queues jobs to run in a pool of up to 50 build machines per organization, a ten-fold increase over Travis’ most powerful advertised plan. With more agents to run jobs, and more powerful agents in use for the biggest jobs, the new CI/CD solution paid off as anticipated. As Robert summarises, “The overall gain in performance compared to Travis had the expected result: We roughly cut the time from push to result in half.” Much of the saving came from the high-performance build agent used for the CPU-intensive Haskell compilation. Time for a full rebuild of the native binaries (with all dependencies in the cache) went from 30 minutes under Travis to about 10 minutes with Semaphore.

Overall, the team got the lead-time and productivity improvements they were looking for. “Time to ship (from merging a PR to the package being deployed) went down due to shorter wait times,” relates the CTO, “making deployments less disruptive to the overall workflow.”

“Low queueing time, fast builds, a neat caching solution.”

Industry

eCommerce

Company Size

100-200 employees

Engineering team

35 people

Stack

Python

Haskell

Nix

JavaScript
