17 Mar 2022 · Software Engineering

    Do You Need to Test Everything?

    6 min read

    At my previous job at a big multinational company, we didn’t have anything remotely resembling an automated test. What manual testing we did consisted of dragging and dropping a folder and mashing F5. When we skipped testing, however, the users ended up doing it for us. After they did their round of testing, they would call or show up at our office, wondering angrily why the product was not working as they expected.

    I start with this story because I don’t want you to think that I do not value tests. I do. Very much. I believe we can’t live without them. But you can have too much of a good thing. However vital water may be, too much of it results in a flood.


    Everybody gets a test…

    Tests are living documentation, messages to other developers (and our future selves) about what the code is supposed to do. Tests enforce requirements and provide feedback, telling us when something breaks.

    A testament to their power is that at least two test-centric philosophies have materialized: Test-Driven Development (TDD) and Behavior-Driven Development (BDD). Both prescribe that we should always write the test before the code. Only then are we allowed to work on implementation.
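    Test-first in practice looks something like this. A minimal sketch (the `slugify` function and its test are illustrative, not from the article): in TDD you would write the test below first, watch it fail red, and only then write the implementation that turns it green.

```python
import unittest

def slugify(title: str) -> str:
    # Implementation written only after the test below existed
    # and had failed once: lowercase the title, join with hyphens.
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(
            slugify("Do You Need to Test Everything"),
            "do-you-need-to-test-everything",
        )

if __name__ == "__main__":
    unittest.main()
```

    The point of the ordering is that the test forces you to decide the function's name, signature, and expected behavior before any implementation detail exists.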

    The TDD Cycle

    The startling thing about both paradigms is that they’re not about eliminating bugs (although they help reduce them). They’re about design. Writing the test first forces us to think from the user’s point of view (even if that user is only us at the beginning). The fact that we end up with an executable test is mostly just a nice side effect.

    Testing’s dark side


    There are two ways of thinking about this: testing in general and testing first.

    When faced with a practice widely advertised as “one size fits all” or “the one true way” like TDD or BDD, I think a bit of skepticism is only healthy. Every framework has its limitations. So what tradeoffs are we making by putting so much emphasis on testing?

    • Investment: an ever-growing test suite has maintenance costs. If the time invested in writing a test nets a positive return, then you’ve added value. But there are no guarantees that the ROI won’t be negative at times.
    • Less readability: testable code is often less readable. Abstractions like inversion of control, dependency injection, mocking, or stubbing are common and make for hard-to-read code.
    • No silver bullet: at times, code makes us jump through hoops to write a test. For instance, a controller in an MVC application needs extensive mocked requests and responses, even to test a little bit of logic.
    • Prototyping: until you have a working prototype, testing is more of a burden than a boon. Testing starts to make sense once the system matures and interfaces stabilize.
    • False positives: tests are also code and they can be buggy, causing false positives and making you hunt for nonexistent bugs.
    • Green mania: my name for the compulsion to reach 100% coverage. While seeing a full green bar is satisfying, no amount of tests can prove that a system is error-free. Testing everything can create the illusion of a perfect codebase, but it doesn’t mean there are no problems.
    • Speed: while there are methods to make tests run very fast, they do still take time and resources to run. So you should make each one count.
    • Cargo cult: some developers feel shame if they don’t write tests for everything.
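    The readability and mocking tradeoffs above are easy to see in code. A hypothetical sketch using Python's `unittest.mock`: the handler contains one line of real logic, yet testing it forces us to fabricate both a request and a response.

```python
from unittest.mock import Mock

def greet_handler(request, response):
    # One line of actual logic buried in framework plumbing.
    response.write(f"Hello, {request.params.get('name', 'world')}!")

# The test needs two fakes before it can make a single assertion.
request = Mock()
request.params = {"name": "Ada"}
response = Mock()

greet_handler(request, response)
response.write.assert_called_once_with("Hello, Ada!")
```

    The scaffolding here is mild; in a real MVC controller the mocked surface (sessions, headers, middleware) can easily dwarf the logic under test.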

    To those who prescribe “test everything, always,” I ask: is the design better because of the test or because I stopped to think before coding?

    The goal is to write code that works

    We should be open to admitting that tests may not be necessary for every single line of code. While it takes experience to know how to write tests, it takes even more to know when and where they should be written.

    So, instead of development being test-driven, wouldn’t it be better if it was thought-driven? We should always be thinking about what we’re doing before putting our hands on the keyboard. Here are some things that you should keep in mind while writing code and trying to decide what you want to test:

    • Do I have a fast test suite? Does your CI/CD pipeline run in under 10 minutes? If not, allocate some effort to streamlining it before extending the suite.
    • Am I following a specification? Is the test covering business requirements? Do you have a specification defined? If so, you must write enough tests to ensure you’re following it.
    • How critical is my code? Not every piece of code is equally important; some may not even deserve a test. Consider what’s the worst that can happen if your code has errors.
    • Is my code interesting? Is the code under test complex enough to warrant a test? Avoid testing trivial code, and make sure you’re not testing the compiler.
    • Am I heavily refactoring only for the sake of testing? How difficult is it to implement a test? In other words, is the test more complex than the actual code tested? If you need to add several levels of ad hoc code, maybe the cure is worse than the disease.
    • Do I control all elements in the test environment? Do you fully own the system? External dependencies lead to flakiness. If you depend on third parties, maybe you need some form of contract testing instead.
    • Do I need excessive setup? Do you need to set up a ton of things to test? It could mean that your code is too tightly coupled, or you’re trying to do unit testing where integration or end-to-end tests would be a better fit.
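    The last two questions often answer each other. A hypothetical sketch: a pure pricing function needs no setup at all, while the same logic wired directly to a database would demand fixtures, connections, and mocks before the first assertion.

```python
def apply_discount(price: float, percent: float) -> float:
    # Pure function: no database, no fixtures, no mocks needed.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# The entire test: one line, zero setup.
assert apply_discount(200.0, 15) == 170.0
```

    When a test for code like this still requires heavy setup, that is usually the code telling you it is too tightly coupled, not the test being at fault.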

    As you can see, exactly when to write a test is a question filled with nuances. When there are no clear answers, perhaps going back to basics helps:

    “I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don’t typically make a kind of mistake (like setting the wrong variables in a constructor), I don’t test for it.”

    Kent Beck on Stack Overflow, creator of Extreme Programming and a leading proponent of TDD.

    Final thoughts

    Kent’s quote should resonate every time a developer sits down to write code. The key factor is self-awareness of your own level of proficiency. A rookie developer needs to write more tests than a veteran because they’re more likely to make basic mistakes. With experience comes a stronger sense of where tests are needed.

    Testing is a tool. Applying it well means knowing when the benefits outweigh the costs. Some people argue that everything should be tested, including static views. Others draw the line at presentational elements. Use a thought-driven approach to decide where your line is and when tests are worth writing.

    Thanks for reading.

    Written by:
    I picked up most of my skills during the years I worked at IBM. Was a DBA, developer, and cloud engineer for a time. After that, I went into freelancing, where I found the passion for writing. Now, I'm a full-time writer at Semaphore.
    Reviewed by:
    I picked up most of my soft/hardware troubleshooting skills in the US Army. A decade of Java development drove me to operations, scaling infrastructure to cope with the thundering herd. Engineering coach and CTO of Teleclinic.