Episode 65 · Jul 12, 2022 · 21:40

Holly Cummins on Getting Into and Testing Microservices

Featuring Holly Cummins, Senior Principal Software Engineer on the Red Hat Quarkus team

Microservices architecture is one of the most popular options for companies going through application modernization or migrating to the cloud. However, some companies end up abandoning microservices altogether once the hype wears off.

In fact, for Senior Principal Software Engineer Holly Cummins, microservices can be a bitter pill for migrating companies that never questioned whether they were making the right decision. In this episode, Holly walks us through the challenges that come with microservices and shares the right way to test them.

Holly Cummins is a Senior Principal Software Engineer on the Quarkus team at Red Hat. Before that, she spent 20 years at IBM, the last five of them as a cloud consultant at IBM Garage, where her role was to help clients with cloud migration.

During that time, Holly guided clients to successful cloud migrations and helped put companies back on track when their migrations ran into trouble.

Cloud migration and microservices

Holly has mixed opinions on companies deciding to adopt cloud and application modernization technologies, especially when it comes to microservices. A microservices architecture breaks an application down into smaller, independent services, each implementing a single piece of functionality and communicating with the others through its own API.
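
As a rough illustration of what one of those single-function services might look like, here is a minimal sketch in Java using JAX-RS-style annotations, the style Quarkus applications typically use. The OrderResource class and the /orders endpoint are hypothetical names invented for this example; a real service would add persistence, validation, and clients for the other services’ APIs.

```java
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import java.util.List;

// Hypothetical single-function service: it owns the "orders" capability and
// exposes it over its own API; other services (billing, shipping, ...) call
// this endpoint rather than sharing code with it.
@Path("/orders")
public class OrderResource {

    record Order(String id, String status) {}

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<Order> listOrders() {
        // Placeholder data; a real service would query its own datastore.
        return List.of(new Order("42", "SHIPPED"));
    }
}
```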

Due to its scalability, compatibility, and decentralized governance, the microservice architecture is a common solution for application modernization.

Perhaps for this very reason, Holly has often run into companies choosing to implement microservices without knowing whether that is really their best move. Organizations may assume microservices will inherently benefit them, but that isn’t necessarily the case.

“Your customers are never going to look at your website and go, ‘Look at all the beautiful microservices’,” she says.

Microservices won’t necessarily give companies what they are looking for, since their problems may be rooted somewhere else entirely. As Holly puts it, some companies think:

“We’re going to microservices because we want to be able to respond to customer requirements faster, but our release board only meets every six months. Well then, you know that your release cadence is going to be once every six months.”

On the other hand, to Holly, monolithic applications are not necessarily legacy applications. She defines legacy code as code that stops you from doing what you want to do: “the legacy is partly about the age, but it’s also about, ‘Does it stop you doing what you want to do?’ And if it stops you from doing what you want to do, then you should be looking to change it.”

In this vein, she sharply points out that in our industry, legacy code is sometimes “the code that we wrote last week”.

In turn, Holly says of legacy systems: “even if it’s old, even if it’s grody, even if it’s unfashionable; if it’s not actually stopping you from doing what you want to do, then maybe you should leave it alone and focus on things less satisfying but more productive.”

Hence, she believes the parts of a legacy system that work as intended shouldn’t be changed: the cost is high and the reward low. Rather than modernizing the whole system, she prefers modernizing only the parts that get in the way of operations, while keeping them connected to the rest of the system.

Lastly, in Holly’s experience, microservices can create problems quite different from those you’d expect in a monolithic application, especially when it comes to testing.

Testing microservices: the testing pyramid

Holly recalls companies struggling to figure out the independent deployment of microservices and the complexity of testing them. Unlike a monolith, where you test a single application, with microservices you need to test each service individually.

Microservice routing and testing add a level of complexity unheard of in monolithic applications. Still, to Holly, microservices are worth it despite the burdens they come with.

Holly has seen organizations set up their pipelines to deploy all microservices at the same time in order to ensure test coverage. This approach, however, has a downside: to her, you give up independent deployment, one of microservices’ main selling points, while having already given up type and API safety.

Likewise, by doing this you also lose guaranteed execution, which isn’t a problem in a monolithic application, without gaining what microservices promise in return, such as agility and decoupling.

Holly’s recommended testing strategy is known as the test pyramid, and it comprises three types of tests for microservices:

  • Unit tests. These test individual pieces of a microservice in isolation. Sitting at the base of the pyramid, unit tests are quick, but they rely on your assumptions about the service’s dependencies; if those assumptions are wrong, the tests give a green light to deploy and the service still fails in production.
  • Integration tests. These test how microservices behave together. Integration tests are slow and take more work to set up than unit tests. To Holly, they are necessary and shouldn’t be left out, yet they become a burden in large numbers, which is why contract tests are best used as the middle layer of the pyramid.
  • Contract tests. These verify that the data microservices exchange stays consistent with an agreed document, or contract. Like unit tests, contract tests are quick and cheap to run (see the sketch after this list).
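
To make the contract-test layer concrete, here is a minimal, hand-rolled sketch in Java assuming JUnit 5. The OrderContractTest class, the OrderResponse record, and the CONTRACT_FIELDS set are hypothetical names invented for this illustration; dedicated tools such as Pact automate this kind of check end to end.

```java
import org.junit.jupiter.api.Test;
import java.util.Arrays;
import java.util.Set;
import static org.junit.jupiter.api.Assertions.assertTrue;

class OrderContractTest {

    // Field names the consumer service relies on, as agreed in a shared contract.
    static final Set<String> CONTRACT_FIELDS = Set.of("id", "status", "total");

    // Hypothetical provider-side DTO that is serialized over the API.
    record OrderResponse(String id, String status, double total) {}

    @Test
    void providerStillSatisfiesConsumerContract() {
        for (String field : CONTRACT_FIELDS) {
            boolean present = Arrays.stream(OrderResponse.class.getRecordComponents())
                    .anyMatch(component -> component.getName().equals(field));
            // A renamed or removed field fails this fast check instead of failing in production.
            assertTrue(present, "Contract field missing from provider response: " + field);
        }
    }
}
```

Because the check runs against the provider’s own response type, a breaking change surfaces in the build rather than as a production incident between services.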

Holly also encourages using feature-flag tools like LaunchDarkly to test microservices’ interoperability in production.
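
As a rough sketch of that idea, the snippet below routes only a flagged slice of production traffic to a new service. The FeatureFlagClient interface is a hypothetical stand-in for a real SDK such as LaunchDarkly’s, and CheckoutService, PaymentClient, and the payments-v2 flag key are likewise invented for illustration.

```java
// Hypothetical stand-in for a feature-flag SDK (e.g. LaunchDarkly's client).
interface FeatureFlagClient {
    boolean isEnabled(String flagKey, String userId, boolean defaultValue);
}

record Order(String id, double total) {}
record Receipt(String orderId, String confirmation) {}

interface PaymentClient {
    Receipt charge(Order order);
}

class CheckoutService {
    private final FeatureFlagClient flags;
    private final PaymentClient legacyPayments; // existing, well-tested path
    private final PaymentClient paymentsV2;     // new microservice under evaluation

    CheckoutService(FeatureFlagClient flags, PaymentClient legacyPayments, PaymentClient paymentsV2) {
        this.flags = flags;
        this.legacyPayments = legacyPayments;
        this.paymentsV2 = paymentsV2;
    }

    Receipt pay(Order order, String userId) {
        // Only the flagged share of real traffic exercises the new service, so
        // interoperability issues surface gradually instead of in a big-bang rollout,
        // and the flag can be switched off instantly if something goes wrong.
        if (flags.isEnabled("payments-v2", userId, false)) {
            return paymentsV2.charge(order);
        }
        return legacyPayments.charge(order);
    }
}
```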

Lastly, there is the possibility of defect escapes. Holly defines an escape as “a defect that was not found by or that escaped from the test team.” To deal with them, she recommends escape analysis: working back from each escaped defect to improve the tests, which raises software quality by making such defects less likely to slip through again.

Besides testing, Holly believes a monorepo, the practice of keeping the code of multiple projects in a single repository, has its perks but also downsides when it comes to microservices. While she sees its appeal for code discoverability and IDE support,

“Once you get to a certain scale, they become challenging even for the IDE and then they become challenging at other levels as well, and so then you need to start to invest in lots of code to make sure that you’re doing your incremental builds and that you’re checking out just the right bit because maybe you don’t need the whole bit.”

-Holly Cummins

Hence, she believes a monorepo is a viable option for microservices, but only for the smallest and the largest companies: the former because their codebase is still small, the latter because they can dedicate a platform team to keeping the repository in shape.

Test maintenance: Slow feedback loops and dealing with flaky tests

Holly encourages getting rid of tests that don’t add value, that is, those that never fail or fail far too often. Rather than retiring such tests immediately, she suggests starting by reducing how often they run: even with a slower feedback loop, the tests continue to act as a safety net in case something breaks.

Still, Holly believes that fixing tests is better than simply deleting them, even for code that hasn’t changed in years: if that code ever does change, those tests may fail, which makes them valuable to keep around.

Lastly, there are flaky tests: tests that sometimes fail and sometimes pass without any change in the code. Holly believes the tooling for dealing with them properly is still missing. When fixing a flaky test right away is counterproductive, it gets quarantined so it doesn’t hold up the delivery pipeline.
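
One common way to quarantine such a test, assuming JUnit 5 with a tag-based filter in the build (for example via Maven Surefire), is sketched below; the class, tag, and helper names are illustrative.

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

class InventorySyncTest {

    // Quarantined: fails intermittently without code changes, so it is tagged
    // and excluded from the main pipeline (e.g. `mvn test -DexcludedGroups=flaky`)
    // while still running in a separate, non-blocking job until it is fixed.
    @Tag("flaky")
    @Test
    void syncEventuallyConvergesAcrossReplicas() {
        assertTrue(runSyncAndCompareReplicas());
    }

    // Hypothetical helper standing in for the real test logic.
    private boolean runSyncAndCompareReplicas() {
        return true;
    }
}
```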

Holly understands that debugging flaky tests is still too cumbersome, and developers require more precise information to fix them:

“I’d love to see that tooling just being more automatic and more widespread and producing just a bit more of a report at the end to say, ‘Okay, so here are your test results. These are intermittent and the failure rate is 50%. These are intermittent and the failure rate is 5%’.”

-Holly Cummins

The bottom line

For more information on Holly Cummins’ projects and ideas, follow her on Twitter and visit her website to find her writing, talks, and interviews.

Meet the host

Darko Fabijan

Darko, co-founder of Semaphore, enjoys breaking new ground and exploring tools and ideas that improve developer lives. He enjoys finding the best technical solutions with his engineering team at Semaphore. In his spare time, you’ll find him cooking, hiking and gardening indoors.
