Semaphore is one of the very few (if not the only) CI services that runs jobs on dedicated hardware. The choice to run machines in data centers has proved very beneficial, as this is what gives our users best-in-class performance. We’re happy to announce that the cluster running the standard platform has been upgraded with new, faster hardware.
This means that our users get even better performance, regardless of which plan they’re subscribed to (including the free tier). Let’s compare some numbers and see how we got here.
Seconds and minutes quickly accumulate over time, and even if you’re running just hundreds of builds per month, the time savings can be measured in hours.
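To make the claim concrete, here is a back-of-the-envelope calculation in shell. The numbers (builds per month, build duration, speedup) are hypothetical and only illustrate the scale of savings:

```shell
# Hypothetical figures: 500 builds/month, 10 minutes per build,
# and a 10% speedup from faster hardware.
builds_per_month=500
minutes_per_build=10
speedup_pct=10

# Minutes saved per month = total build minutes * speedup fraction.
saved=$(( builds_per_month * minutes_per_build * speedup_pct / 100 ))
echo "${saved} minutes saved per month"
```

With these assumed numbers, 500 minutes (over eight hours) of waiting disappear each month.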
The benchmarking was done with sysbench, and the results are as follows:
| Benchmark | Before | After | Improvement |
|-----------|--------|-------|-------------|
| memory | 3975.98 MB/sec | 4565.33 MB/sec | 14.7% |
| file io | 4.8153 Gb/sec | 5.051 Gb/sec | 4.9% |
| mysql | 328363 (5472.57/s) | 359521 (5991.16/s) | 9.4% |
You can run the benchmarks locally as well, by installing sysbench and executing the appropriate commands:
```shell
# Installation (Ubuntu)
$ sudo apt-get install -y sysbench

# CPU
$ sysbench --test=cpu --cpu-max-prime=20000 --num-threads=1 run

# Memory
$ sysbench --test=memory --num-threads=1 run

# File IO
$ sysbench --test=fileio --file-total-size=1G prepare
$ sysbench --test=fileio --file-total-size=1G --file-test-mode=rndrw --init-rng=on --max-time=300 --max-requests=0 run

# MySQL
$ sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=test --mysql-user= --mysql-password= prepare
$ sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=test --mysql-user= --mysql-password= --max-time=60 --oltp-read-only=on --max-requests=0 --num-threads=8 run
```
The tests are mostly self-explanatory, but File IO deserves some explanation, as its result may vastly differ from your local run or from other CI services. This high number is the result of Semaphore running its jobs on a RAM disk, which gives excellent performance, especially with current DDR4 memory combined with an Intel Core i7-7700 (Kaby Lake) CPU.
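If you want to check what kind of filesystem backs your build directory, a quick way (using GNU coreutils `df`, so the exact flags assume a Linux environment) is:

```shell
# Print the filesystem type backing the current directory.
# On a RAM-disk setup this reports tmpfs; on a typical local
# machine you will see ext4, xfs, or similar instead.
df --output=fstype . | tail -n 1
```

Comparing that output between your laptop and a CI job helps explain large File IO differences in the benchmark above.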
How we got here
In the humble beginnings, circa 2012, the now-replaced machines ran the first builds on Semaphore. There were just a dozen of them, and they became our pets. As the need for capacity grew, their numbers grew as well. Our pets turned into cattle at that point, but some of them still remained by our side.
Having special snowflakes in infrastructure is of course an anti-pattern, and thanks to following infrastructure-as-code (IaC) best practices, there were no software or configuration differences between machines. IaC also gave us a very good foundation for carrying out the migration.
How we did it
The migration was carried out over three days, in a blue-green deployment fashion. The workload was gradually diverted to the new cluster, and from the users’ perspective this was completely seamless. A good analogy would be replacing a moving car on the highway with a faster one, without the driver noticing.
As with every phase of growth, this one sprouted some beneficial changes to our workflow as well. Working at scale gives a whole new perspective on things, and encourages you to rethink core processes and apply automation where you never thought to. As a result, instead of adding a couple of machines per hour, we can now rapidly bring an arbitrary number of new machines into our production infrastructure. The process of managing these machines was refreshed as well, and we replaced a lot of manual work with automation.
Naturally, this speed boost in synthetic tests won’t map directly to every project configuration in the real world, but depending on the composition, size, and job type of your test suite, you should see an improvement between 5% and 15%.
Thanks for your attention, and as always, happy building! To all the old and new CPU cycles!
At Semaphore, we’re on a mission to make continuous integration fast and easy. If you’re new to Semaphore, learn more about our hosted continuous integration and delivery service.