28 Dec 2022 · Software Engineering

    Writing Clean and Efficient Table-Driven Unit Tests in Go

    8 min read

    Over the past few years, table-driven tests have become a popular choice for unit tests within the Go community. They make it easy to test a function with a wide variety of inputs: a table-driven test consists of multiple rows, each specifying the given inputs and the expected outputs. You can keep these tests clean and efficient by following a set of simple guidelines. By “efficient”, I mean testing as many inputs as possible with the least amount of overhead.

    What is a table-driven unit test?

    This is what a table-driven unit test should look like: a function sum(a, b int) int that takes two integers and returns their sum is tested with two test cases. Each test case is represented by an anonymous struct consisting of the two given input arguments and the expected result, and is mapped to a unique name describing the case.

    func TestSum(t *testing.T) {
        tests := map[string]struct {
            a        int
            b        int
            expected int
        }{
            "10+5": {
                a:        10,
                b:        5,
                expected: 15,
            },
            "2+2": {
                a:        2,
                b:        2,
                expected: 4,
            },
        }
    
        for name, test := range tests {
            t.Run(name, func(t *testing.T) {
                actual := sum(test.a, test.b)
    
                if actual != test.expected {
                    t.Errorf("sums don't match: expected %v, got %v", test.expected, actual)
                }
            })
        }
    }

    After declaring the test cases via the tests map, the test function iterates over all the map entries, i.e. the test cases. The sum function is called in each iteration using the input arguments of the current test case. If the actual result doesn’t match the result expected by the test case, the test fails with an error message.

    The example above is relatively simple, but there are some interesting details to it. In fact, this test function already applies a number of suggestions from this article.

    Maps instead of slices

    Sometimes, test cases in a table-driven setup contain a name field to uniquely identify and describe the test case. These test cases are then stored in a slice.

    tests := []struct {
        name     string
        a        int
        b        int
        expected int
    }{
        // Test cases ...
    }

    While this approach works well from a technical point of view, storing the test cases inside a map is preferable. When a lot of complex test cases start occupying too much screen space, it’s convenient to collapse individual test cases within the IDE. With a plain slice, collapsing a test case hides its name along with the rest of the struct; with a map, the name remains visible as the key. Besides being easier to navigate, a map also clearly separates the test name from its fixtures.

    tests := map[string]struct {
        a        int
        b        int
        expected int
    }{
        // Test cases ...
    }


    Another nice side effect of using a map is its undefined iteration order: the Go runtime may or may not execute the test cases in the same order on every run, exposing faulty test setups in which tests only pass when executed in a certain order. This can happen when some state is accidentally shared between test cases.
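    As a contrived sketch of such a faulty setup, the following test cases share a counter and only pass when they happen to run in declaration order. A slice would execute them deterministically and hide the problem; a map will surface it sooner or later:

    func TestCallCount(t *testing.T) {
        var callCount int // state accidentally shared between the test cases

        tests := map[string]struct {
            expected int
        }{
            "first call":  {expected: 1},
            "second call": {expected: 2},
        }

        for name, test := range tests {
            t.Run(name, func(t *testing.T) {
                callCount++ // each subtest mutates the shared counter

                if callCount != test.expected {
                    t.Errorf("call counts don't match: expected %v, got %v", test.expected, callCount)
                }
            })
        }
    }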

    Comparing values

    The example at the beginning only tests and compares primitive values using a built-in operator. When dealing with more complex comparisons, especially with comprehensive custom structs, this approach is usually too cumbersome. A library like Google’s go-cmp provides a convenient way of checking such values for equality.

    Not only does go-cmp compare the values, it also returns the diff between the expected and the actual value as a string. Performing a comparison is simple – if the diff string is not empty, the values are different.

    if diff := cmp.Diff(15, 20); diff != "" {
        t.Errorf("sums don't match: %v", diff)
    }

    Because the two provided arguments aren’t equal, the library reports a diff where - indicates the expected value, and + indicates the actual value.

    sums don't match:
    -: 15
    +: 20
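    Plugged into a table-driven test, the same pattern covers arbitrary structs. The following sketch assumes a hypothetical user struct and a newUser constructor under test, with go-cmp imported from github.com/google/go-cmp/cmp:

    func TestNewUser(t *testing.T) {
        // user and newUser are hypothetical; they only illustrate
        // comparing non-primitive values with go-cmp.
        tests := map[string]struct {
            name     string
            age      int
            expected user
        }{
            "adult user": {
                name:     "Alice",
                age:      30,
                expected: user{Name: "Alice", Age: 30},
            },
        }

        for name, test := range tests {
            t.Run(name, func(t *testing.T) {
                actual := newUser(test.name, test.age)

                // An empty diff means the values are equal.
                if diff := cmp.Diff(test.expected, actual); diff != "" {
                    t.Errorf("users don't match: %v", diff)
                }
            })
        }
    }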

    Subtests

    Instead of running the test cases directly inside the for loop, our example uses the Run method for executing the test case as a so-called subtest.

    These subtests change the behavior of the test function: even if one of the test cases fails with Fatalf, all the other test cases will still be executed, because Fatalf only aborts the subtest it is called from. A complete list of failing test cases (or a list of failing and passing test cases when the -v flag is used) will be printed when running a test function using subtests. In contrast, calling Fatalf for a failing test case directly inside the loop would cause the entire test function to exit, omitting the remaining test cases.

    --- FAIL: TestSum (0.00s)
        --- FAIL: TestSum/8+8 (0.00s)
            sum_test.go: sums don't match:
                -: 16
                +: 20
        --- FAIL: TestSum/20+20 (0.00s)
            sum_test.go: sums don't match:
                -: 40
                +: 50

    There is no need to print the test name in the error message when a test fails, because the test name has already been passed to Run. The aforementioned list of test cases will include the respective test names.

    Another benefit of subtests is the ability to run only a particular test case. Note that the -run flag takes a regular expression, so the + in the test name has to be escaped. The following command, for example, exclusively runs the 10+5 subtest:

    go test -run="TestSum/10\+5"

    Being able to run a specified subtest facilitates debugging that particular test and saves execution time.

    Logging and failing

    When an actual value doesn’t match an expected value in a comparison, there are two options. Either an error message is logged using Errorf or the test is terminated using Fatalf.

    The first option only logs the error and marks the test case as failed, but proceeds with the test function and its subsequent checks. The latter option terminates the entire test case immediately upon failure. It is good practice to use these two options deliberately: by default, when a check fails, it is advisable to only log the failure using Errorf and continue with the remaining checks. Only if a check is a precondition for subsequent checks, so that proceeding wouldn’t make any sense, should Fatalf be used to fail immediately.

    if len(actual) != len(test.expected) {
        t.Fatalf("lengths don't match: expected %v, got %v", len(test.expected), len(actual))
    }
    
    for i, value := range test.expected {
        if actual[i] != value {
            t.Errorf("values at index %d don't match: expected %v, got %v", i, value, actual[i])
        }
    }

    The test above aims to compare the contents of two slices, test.expected and actual. It does so by comparing the values in a loop. Accessing the indexes is only safe if the lengths of the slices are the same, so the test immediately fails with Fatalf if this precondition isn’t met.

    If the lengths are the same, the values within the slices are compared index by index. An error is logged using Errorf if the elements don’t match, which will mark the test as failed but continue with the comparisons within the loop.

    Helper structs

    A test case can become quite large for functions with many parameters or many dependencies that need to be set up. To keep such complex test cases clear and manageable, it can make sense to extract the input parameters and the expected results into their own helper structs.

    func TestSum(t *testing.T) {
        type Given struct {
            a int
            b int
        }
    
        type Expected struct {
            sum int
        }
    
        tests := map[string]struct {
            given    Given
            expected Expected
        }{
            // Test cases ...
        }
    
        // Test logic ...
    }

    The given input parameters are stored in a dedicated Given struct, and the expected result is stored in an Expected struct. The test case is still an anonymous struct, but it only contains these two fields. Of course, this approach is better suited to test cases of higher complexity than the TestSum example.
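    Filled in with one of the test cases from the beginning, the tests map in this layout would look like this:

    tests := map[string]struct {
        given    Given
        expected Expected
    }{
        "10+5": {
            given:    Given{a: 10, b: 5},
            expected: Expected{sum: 15},
        },
    }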

    Note that both helper structs are defined inside the TestSum function. Because they are scoped to the function, every test function in the package can define its own Given and Expected types without name collisions. Declaring a third struct for storing mocked dependencies can be helpful, too.
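    A minimal sketch of that third struct, assuming a hypothetical storeMock that fakes a storage dependency of the function under test:

    func TestUserName(t *testing.T) {
        type Given struct {
            userID int
        }

        // Mocks groups the mocked dependencies of the function under
        // test. storeMock is a hypothetical hand-written fake or
        // generated mock.
        type Mocks struct {
            store *storeMock
        }

        type Expected struct {
            name string
        }

        tests := map[string]struct {
            given    Given
            mocks    Mocks
            expected Expected
        }{
            // Test cases ...
        }

        // Test logic ...
    }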

    Splitting functions

    Table-driven tests are a good indicator of a function’s complexity. Test cases that become too convoluted are a sign that the function under test has either too many dependencies or too many responsibilities. If your function retrieves data from a third-party service, forwards this data to another service, and writes the data into a local database at the same time, chances are that the function is doing too many things at once. In this case, the function should be split into multiple smaller functions with clearly separated concerns.

    Splitting the function into smaller pieces will simplify the test cases for these functions and reduce the effort of adding new ones. Both the functions themselves and their test cases will be easier to reason about.
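    As a rough sketch of such a refactoring, assume hypothetical user, fetcher, notifier, and store types: each concern gets its own small interface and its own focused table-driven test, while the orchestrating function merely wires them together:

    type user struct {
        ID   int
        Name string
    }

    // One small interface per concern; each implementation can be
    // tested in isolation with its own table of test cases.
    type fetcher interface {
        FetchUser(id int) (user, error)
    }

    type notifier interface {
        NotifyUser(u user) error
    }

    type store interface {
        SaveUser(u user) error
    }

    // syncUser only orchestrates the three concerns and can itself be
    // tested with simple mocks for each interface.
    func syncUser(f fetcher, n notifier, s store, id int) error {
        u, err := f.FetchUser(id)
        if err != nil {
            return err
        }

        if err := n.NotifyUser(u); err != nil {
            return err
        }

        return s.SaveUser(u)
    }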

    The bottom line

    Testing your functions using the table-driven approach enables testing a large number and variety of input arguments with comparatively little overhead. However, table-driven tests don’t guarantee sound test setups and clean test functions by themselves. By following the guidelines demonstrated here, you will be able to keep your tests as clean and efficient as possible. For a full example, check out the table-driven-tests repository.

