
    21 Dec 2022 · Software Engineering

    Measuring Page Speed with Lighthouse and CI/CD

    9 min read

    Page speed matters more than you think. According to research by Google, the probability of users staying on your site plummets as load time grows: going from a 1-second to a 10-second load time increases the probability of a bounce by a whopping 123%. In other words, speed equals revenue.

    Graph showing the increase in bounce probability as page load time grows: 1 to 3 seconds = +32%, 1 to 5 seconds = +90%, 1 to 6 seconds = +106%, and 1 to 10 seconds = +123%.
    Source: "Find out how you stack up to new industry benchmarks for mobile page speed", Google.

    How can we ensure that our pages are loading at top speed? The answer is to measure them regularly with Lighthouse and CI/CD.

    Measuring page speed with Lighthouse

    Lighthouse is a page speed benchmark tool created by Google. It runs a battery of tests against your website and produces a report with detailed advice to improve performance.

    The Lighthouse HTML report. Includes scores for Performance, Accessibility, PWA, and Best practices. It also shows a sequence of how the webpage is rendered over time.
    Lighthouse running inside Google Chrome.

    ⚠️ You might be surprised by the low scores Lighthouse reports. This is because the tool simulates a mid-tier mobile device on a slow 4G connection.

    While Lighthouse’s primary focus is performance, it can assess other things like:

    • Accessibility: suggests opportunities to make the content more accessible. This covers ensuring that the page works with screen readers, that interactive elements have labels, and that the site can be navigated with the keyboard.
    • Best practices: checks for various sane practices that improve speed and security.
    • SEO: performs various checks to ensure that the page is SEO-optimized.
    • PWA: ensures the page passes progressive web application tests, which improves user experience on mobile devices.

    4 ways of running Lighthouse

    Lighthouse is an open-source project that you can run in different ways:

    1. Since it is included in Google Chrome, you can run it directly from the browser. Click on More tools > Developer Tools and open the Lighthouse tab.
    2. If you have Node installed, you can install it with npm install -g lighthouse and then run it from the command line like this: lighthouse https://semaphoreci.com
    3. You can include it in your code as a Node package.
    4. And finally, Lighthouse has a CI version you can run in your continuous integration. We’ll use this method to schedule periodical benchmarks.

    Setting up Lighthouse CI

    In order to run Lighthouse CI, you’re going to need the following:

    • Google Chrome.
    • A GitHub or Bitbucket account.
    • The LTS version of Node.

    Let’s get started by installing Lighthouse CI with: npm install -g @lhci/cli

    Next, create a config file called lighthouserc.js. The url parameter takes an array of websites to benchmark. We can adjust numberOfRuns; more runs yield more statistically relevant results.

    // lighthouserc.js
    module.exports = {
      ci: {
        collect: {
          url: ['https://semaphoreci.com'],
          numberOfRuns: 1
        },
        upload: {
          target: 'filesystem',
          outputDir: 'reports'
        }
      }
    };

    Bear in mind that, while Lighthouse is not particularly heavy on resources, setting the number of runs to a high value can make the job take too long to complete. Never forget that you want your CI pipeline to complete in 5 to 10 minutes at most.

    Now we can run Lighthouse with: lhci autorun. When finished, we’ll find a few files in the reports folder:

    • manifest.json: contains a list of the reports generated.
    • $url-$timestamp-report.html: the main report, which you can open in any browser.
    • $url-$timestamp-report.json: the same report in JSON format.
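As a quick illustration of what you can do with these files, here is a sketch that picks out the representative (median) run from manifest.json. The manifest shape below is an assumption based on the fields Lighthouse CI typically writes (url, isRepresentativeRun, and a summary of category scores); check your own manifest.json before relying on it:

```javascript
// Sketch: selecting the representative run from Lighthouse CI's manifest.json.
// The sample data below is hand-made; a real manifest would be read with
// JSON.parse(fs.readFileSync('reports/manifest.json', 'utf8')).
const manifest = [
  { url: 'https://semaphoreci.com', isRepresentativeRun: false, summary: { performance: 0.78 } },
  { url: 'https://semaphoreci.com', isRepresentativeRun: true,  summary: { performance: 0.82 } },
];

// Keep only the run Lighthouse CI flags as the median for each URL.
const representative = manifest.filter((run) => run.isRepresentativeRun);

console.log(representative.map((run) => `${run.url}: ${run.summary.performance}`));
// → [ 'https://semaphoreci.com: 0.82' ]
```

This is handy when multiple runs per URL pile up in the reports folder and you only want to look at the run Lighthouse CI considers representative.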

    Using Lighthouse for non-functional testing

    Lighthouse lets us configure limits for each category. When a score falls below its threshold, the tool exits with an error, failing the CI pipeline.

    For example, let’s say we don’t want any of our site’s scores to fall below 80%. To achieve that, we need to add assertions to the config file:

    // lighthouserc.js
    module.exports = {
      ci: {
        collect: {
          url: ['https://semaphoreci.com', 'https://semaphoreci.com/blog'],
          numberOfRuns: 5
        },
        upload: {
          target: 'filesystem',
          outputDir: 'reports'
        },
        // enforce limits
        assert: {
          assertions: {
            'categories:performance': ['error', {minScore: 0.8}],
            'categories:accessibility': ['error', {minScore: 0.8}],
            'categories:seo': ['error', {minScore: 0.8}],
            'categories:best-practices': ['error', {minScore: 0.8}]
          }
        }
      }
    };

    Measuring page speed with CI/CD

    The next step is to automate Lighthouse with CI/CD. And for that, we’ll need to create a code repository and a Node project.

    1. Create and clone a repository on GitHub or Bitbucket.
    2. Initialize a Node project with npm init
    3. Add Lighthouse CI as a dependency: npm add @lhci/cli
    4. Add a start script in package.json:
    "scripts": {
    	"start": "lhci autorun"
    }

    Do a trial run with: npm start

    When done, you should have the following files in your project:

    ├── lighthouserc.js
    ├── node_modules
    ├── package-lock.json
    ├── package.json
    └── reports
        ├── manifest.json
        ├── semaphoreci_com-_-2022_10_26_12_58_56.report.html
        └── semaphoreci_com-_-2022_10_26_12_58_56.report.json

    If everything checks out, commit your files to the remote repository.

    Continuous benchmarking with CI/CD

    Now it’s time to run the project on Semaphore and schedule periodic Lighthouse benchmarks. If you’ve never used Semaphore before, check the getting started guide first.

    Let’s create a CI pipeline that runs Lighthouse CI:

    1. Add your GitHub or Bitbucket repository to Semaphore.
    2. Select “Single Job” as a starting workflow and click on Customize.
    Click on Single job and Customize
    3. Type the following commands into the first block:
    sem-version node 18
    cache restore
    npm install
    cache store
    npm start
    4. Open the Epilogue section, which runs even when the job fails, and type the following commands to save the reports in permanent storage:
    artifact push workflow reports
    for f in reports/*.html; do artifact push project "$f"; done
    5. Start the workflow with Run workflow.
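If you prefer editing the pipeline as code rather than with the visual editor, the steps above correspond roughly to a pipeline definition like the one below. Treat it as a sketch: the block and job names are made up, and you should double-check the machine type and OS image against your organization's settings:

```yaml
# .semaphore/semaphore.yml — sketch of the pipeline built above
version: v1.0
name: Lighthouse benchmarks
agent:
  machine:
    type: e1-standard-2      # adjust to your plan
    os_image: ubuntu2004
blocks:
  - name: Benchmark
    task:
      jobs:
        - name: lhci
          commands:
            - checkout
            - sem-version node 18
            - cache restore
            - npm install
            - cache store
            - npm start
      epilogue:
        always:
          commands:
            - artifact push workflow reports
            - for f in reports/*.html; do artifact push project "$f"; done
```

The epilogue's always section mirrors the Epilogue box in the visual editor, so the reports are uploaded even when an assertion fails the job.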

    What did we get for all this work? We now have a pipeline that scans the website and throws an error if it fails to make the cut.

    You can check all the reports in the project’s Artifacts tab.

    The artifacts tab on Semaphore shows the contents of the reports folder, which contains all the HTML reports generated by all the runs.
    The HTML report can be found in the project’s Artifacts tab.

    Scheduling and setting a retention policy

    Semaphore will, by default, run the pipeline on every commit to the repository. This works great when you’re using Lighthouse during development. But we need more if we want to monitor a live page.

    We need a scheduler:

    1. Open the Scheduler tab in your Semaphore project and click New Scheduler.
    2. Define how often you want to run the checks.

    The following example runs a check every day at 12 PM:

    An example of the Semaphore scheduler window where you can set up a schedule using crontab syntax.
    Setting up a daily schedule to scan the website.

    The final step is to configure a retention policy to delete old reports:

    1. Go to the Project Settings tab.
    2. Click on Artifacts.
    3. Set retention policies for each file type in the reports folder.

    The following example sets a 6-month retention for HTML reports and 1-month retention for the rest of the files.

    Setting up a retention policy for reports.

    As a final touch, you can set up Slack notifications to get pinged when tests fail.

    How to analyze the results

    Our setup works, but I see a couple of problems with it:

    • The results are scattered across standalone HTML reports that you have to download and open one by one.
    • There is no easy way to spot trends across runs over time.

    If these are problems for you, let’s explore some ideas to improve the testing.

    Setting up a dashboard

    The Lighthouse CI project includes an optional dashboard that lets you browse historical data and find trends.

    The Lighthouse Server Dashboard. It shows a graph of the page score over time and entries for the last 5 runs.
    Lighthouse CI dashboard.

    Installing the dashboard requires a separate server and database. You’ll need a dedicated machine and persistent storage to save historical data.

    The downside of this approach is obvious — you need to manage yet another server. But it may be worth doing if you have a lot of sites to analyze.

    Previewing test results on Semaphore

    If installing a dedicated server is overkill, you can still get a better experience and more insights with Semaphore’s test reports. This feature allows you to visualize all tests at a glance.

    The test tab in Semaphore shows all test results in a single sortable and filterable page.
    Viewing page speed tests historical data with the test tab in Semaphore.

    This approach has the added benefit of letting you write your own tests. That means you can do all kinds of fine-grained validations. For instance, you can fail a test if the page doesn’t use HTTPS, uses deprecated Web APIs, or the project doesn’t include a PWA manifest.

    To make this work, however, you’ll need to do a little bit of work:

    1. Open lighthouserc.js and remove all the assertions (or set them to warn instead of error) so the scanning job never fails.
    2. Install your favorite testing framework. It must support JUnit XML reports, so I recommend Jest with jest-junit.
    3. Write some tests, ensuring they produce JUnit reports. You can parse manifest.json and the JSON reports and make assertions depending on your needs.
    4. Add an audit script in package.json:
    "scripts": {
    	"start": "lhci autorun",
    	"audit": "jest --ci"
    }
    5. Push the changes to the repository.
    6. Add a second job in your pipeline to audit the results:
    sem-version node 18
    artifact pull workflow reports
    cache restore
    npm run audit
    7. Add the publish and collect commands as described in the test results documentation.
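To make the idea concrete, here is a sketch of the kind of fine-grained check such an audit can perform. The audit ids is-on-https and deprecations are real Lighthouse audits, but the checker function and the sample report below are hypothetical; a real test would load the JSON report written by lhci instead of hard-coding one:

```javascript
// Sketch: custom checks over a Lighthouse JSON report (the "lhr" object).
// In a real Jest test, load the report generated by lhci, e.g.:
//   const lhr = JSON.parse(fs.readFileSync(jsonPathFromManifest, 'utf8'));
function auditReport(lhr) {
  const failures = [];
  if (lhr.audits['is-on-https'].score !== 1) {
    failures.push('page is not served over HTTPS');
  }
  if (lhr.audits['deprecations'].score !== 1) {
    failures.push('page uses deprecated Web APIs');
  }
  return failures;
}

// Hand-made sample report for illustration only.
const sampleReport = {
  audits: {
    'is-on-https': { score: 1 },
    deprecations: { score: 0 },
  },
};

console.log(auditReport(sampleReport));
// → [ 'page uses deprecated Web APIs' ]
```

Each string returned by a checker like this can become a failed Jest assertion, which jest-junit then turns into an entry in the JUnit XML report that Semaphore displays.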

    You can check a working example in my GitHub repository.

    Now you can preview how many tests failed on the Activity tab:

    The main project view in Semaphore shows how many tests failed in a build without having to open the workflow or see the job logs.
    You can see how many tests failed at a glance.

    And you’ll find a detailed test summary report for each run in the Test tab:

    The test tab on Semaphore shows various tests that have failed.
    You can sort the test results and get insights without having to read the output of the test jobs.

    Running Lighthouse in web development

    We’ve only focused on testing an existing and running website. But a setup like this can perform non-functional tests during web development. You can, for instance, fail commits that make the site perform worse or break accessibility rules.
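For example, during development you can have Lighthouse CI start your local server and fail the commit when scores regress. The variant of lighthouserc.js below is a sketch: startServerCommand is a real Lighthouse CI option, but the npm run serve command, port, and thresholds are assumptions you would adapt to your project:

```javascript
// lighthouserc.js — development variant (sketch; adapt server command, URL, and thresholds)
module.exports = {
  ci: {
    collect: {
      // Lighthouse CI starts the server, waits for it to respond, and stops it afterwards.
      startServerCommand: 'npm run serve',
      url: ['http://localhost:3000'],
      numberOfRuns: 3
    },
    assert: {
      assertions: {
        'categories:performance': ['error', {minScore: 0.9}],
        'categories:accessibility': ['error', {minScore: 0.9}]
      }
    }
  }
};
```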

    A CI/CD pipeline with three jobs. From left to right: download dependencies, build, run lighthouse. Two continuous deployment pipelines follow to deploy on Netlify and an S3 bucket.
    Lighthouse used as a quality and performance gate before deploying a static website.

    Time is money

    Users are drawn to fast and responsive websites. The problem is that measuring page speed reliably is challenging since you cannot assume that everyone is on a fast connection and uses a top-tier device. With Lighthouse in your CI/CD pipeline, you can get results closer to real-life conditions and insights to help you continually improve.

    Thanks for reading!


    Written by: Tomas Fernandez
    I picked up most of my skills during the years I worked at IBM. Was a DBA, developer, and cloud engineer for a time. After that, I went into freelancing, where I found the passion for writing. Now, I'm a full-time writer at Semaphore.
    Reviewed by:
    I picked up most of my soft/hardware troubleshooting skills in the US Army. A decade of Java development drove me to operations, scaling infrastructure to cope with the thundering herd. Engineering coach and CTO of Teleclinic.