Practical Guide to Test Doubles in Elixir

Draw a clear line between fakes, stubs, mocks, and spies in Elixir, and learn how to use them.


Introduction

In this tutorial, we'll be looking at one of the main use cases for test doubles in Elixir: aiding in the interaction between domain logic and impure components that exist over the boundaries of an application.

Tests relying on such components are usually brittle, and need to reach out to external resources whose state may be foreign and unpredictable, thereby causing slowness and sometimes making tests fail unexpectedly. Classic examples involve the network, the filesystem, and the database.

The Goals of This Tutorial

Our goal is to draw a clear line between fakes, stubs, mocks, and spies according to Martin Fowler's definition, and learn how they can be used in practice. We won't use any libraries, but we will refer to some options in case you want to dive deeper.

Prerequisites

It is assumed that you are somewhat familiar with Elixir and ExUnit, and want to learn how to use test doubles in Elixir.

The Example Application

In the previous tutorial, we developed a library to import a CSV file into an in-memory persistence layer we named repository. It can be summed up by the following function call:

Csv.Import.call("sites.csv", headers: [:name, :url], schema: Site)

From left to right, the above arguments are:

  • A path to the source CSV file,
  • The CSV headers of interest, and
  • A struct to hold a parsed CSV row.

Under the hood, this function loops over the CSV file and parses the specified columns (identified by headers) into structs, after which they get inserted into the persistence layer via parallel tasks. We'll refer to these structs as "records".
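The loop described above can be sketched with Task.async_stream. The snippet below is a simplified stand-in, not the library's actual code: rows plays the role of the parsed CSV stream, and insert_fun the role of the repository's insert function.

```elixir
# Simplified sketch of the import loop (not the library's actual code):
# each parsed record is handed to an insert function via parallel tasks.
defmodule ImportSketch do
  def call(rows, insert_fun) do
    rows
    |> Task.async_stream(insert_fun, max_concurrency: 10)
    # Task.async_stream wraps each result in {:ok, result}; unwrap it
    |> Enum.map(fn {:ok, result} -> result end)
  end
end

rows = [%{name: "Elixir Language"}, %{name: "Semaphore Blog"}]
ImportSketch.call(rows, fn record -> {:ok, record} end)
# => [{:ok, %{name: "Elixir Language"}}, {:ok, %{name: "Semaphore Blog"}}]
```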

From now on, let's call this module Importer, and use it to drive out the upcoming examples.

Fakes

Fakes are working implementations of an interface, but they usually take some shortcut which makes them not suitable for production (an in-memory database is a good example).

Let's rephrase this definition: fakes are working implementations of an interface, and as such, they do not deliver canned responses.

Fakes can be stateful or stateless, meaning they may or may not be backed by a process.
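For instance, a stateless fake needs no process at all. The hypothetical module below honours the same repository contract while simply acknowledging every record:

```elixir
# A stateless fake (hypothetical): no process, no stored state.
# It honours the repository contract by acknowledging every insert.
defmodule NullRepo do
  def insert(record), do: {:ok, record}
  def all, do: []
end

NullRepo.insert(%{name: "Elixir Language"})
# => {:ok, %{name: "Elixir Language"}}
```

A stateful fake, like the Agent-backed repository we'll see shortly, is the right choice when the test needs to read inserted data back.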

In the following sections, we will explore an example of a fake used together with dependency injection and formalized with a behaviour. After that, we will move on to an example of a fake with compile-time injection.

Fakes in Practice

Let's get back to our Importer. The "repository" collaborator is not crossing the boundaries of our application. Instead, it is reaching out to a process spawned by the following Agent:

defmodule Csv.Repo do
  @me __MODULE__

  def start_link do
    Agent.start_link(fn -> [] end, name: @me)
  end

  def insert(record) do
    Agent.update @me, fn(collection) -> collection ++ [record] end
    {:ok, record}
  end

  def all do
    Agent.get(@me, fn(collection) -> collection end)
  end
end

Using this Agent is rather simple. Start it up, insert a few records, and retrieve them back:

alias Csv.Repo

# Start it up
Repo.start_link

# Insert a few records
Repo.insert %Site{name: "Elixir Language", url: "https://elixir-lang.org"}
Repo.insert %Site{name: "Semaphore Blog", url: "https://semaphoreci.com/blog"}

# Retrieve them back
Repo.all
# [
#  %Site{name: "Elixir Language", url: "https://elixir-lang.org"},
#  %Site{name: "Semaphore Blog", url: "https://semaphoreci.com/blog"}
# ]

Even though the above repository has been conceived with the intent of satisfying its surrounding logic, clients may want to use an actual database in production - as opposed to cutting corners with a volatile in-memory process. Therefore, our repository can be seen as a stateful fake. It may not feature persistent storage, but it works perfectly well for testing purposes.

Here's how our fake can be used as a database:

test "imports records of a csv file" do
  # Start the fake
  Repo.start_link
  options = [schema: Site, headers: [:name, :url]]

  # Call the importer. It uses the fake under the hood.
  "test/fixtures/sites.csv" |> Csv.Import.call(options)

  # Assert the state of the Agent matches sites.csv
  assert [
    %Site{name: "Elixir Language", url: "https://elixir-lang.org"},
    %Site{name: "Semaphore Blog", url: "https://semaphoreci.com/blog"}
  ] = Repo.all
end

Dependency Injection

The rationale behind using a stateful fake is to avoid setting up a database before running our tests. Doing so would have steered us away from our module’s core functionality: orchestrating work between extracting, parsing, and inserting records.

However, there's a more important reason — decoupling from the persistence layer, which opens up the possibility to replace the repository with a database, a JSON writer, an XML writer, or whatever else we wish. It's possible to convert a CSV file into another format just by changing the repository.
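To make that concrete, here's a hypothetical alternative repository that emits each record as a CSV line instead of storing it. Any module exposing a compatible insert/1 can stand in:

```elixir
# Hypothetical alternative repository: same insert/1 contract,
# but it prints each record as a CSV line instead of storing it.
defmodule PrinterRepo do
  def insert(record) do
    IO.puts("#{record.name},#{record.url}")
    {:ok, record}
  end
end
```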

That boundary is not open yet, though; notice the static reference to Csv.Repo inside our Importer:

defmodule Csv.Import do
  alias Csv.RecordStream

  def call(input_path, schema: schema, headers: headers) do
    {:ok, device} = File.open(input_path)
    {:ok, stream} = RecordStream.new(device, headers: headers, schema: schema)

    # Csv.Repo is hardcoded
    stream
    |> Task.async_stream(Csv.Repo, :insert, [], max_concurrency: 10)
    |> Enum.to_list
  end
end

To enable clients to change the repository via dependency injection, we can pass a "repo" variable into Csv.Import.call. Let's do that in our test:

test "imports records of a csv file" do
  Repo.start_link

  # We are adding "repo: Csv.Repo" in here
  options = [schema: Site, headers: [:name, :url], repo: Csv.Repo]

  "test/fixtures/sites.csv" |> Csv.Import.call(options)

  assert [
    %Site{name: "Elixir Language", url: "https://elixir-lang.org"},
    %Site{name: "Semaphore Blog", url: "https://semaphoreci.com/blog"}
  ] = Repo.all
end

Finally, let's introduce a repo argument to the function signature, and then use that variable instead of Csv.Repo:

# First, introduce a "repo" argument
def call(input_path, schema: schema, headers: headers, repo: repo) do
  {:ok, device} = File.open(input_path)
  {:ok, stream} = RecordStream.new(device, headers: headers, schema: schema)

  # Second, replace Csv.Repo with "repo"
  stream
  |> Task.async_stream(repo, :insert, [], max_concurrency: 10)
  |> Enum.to_list
end

Dependency injection means passing a module around as a runtime dependency. It promotes a principle called dependency inversion: instead of making a function depend on a concrete module, we make it depend on any module implementing a given interface. The main benefit is swappability of dependencies - in other words, decoupling - as long as the dependency honours the established interface.

Explicit Contracts via Behaviours

With a very simple change, the boundaries of our code became explicit and we've unlocked its ability to work with other repositories. Strictly speaking, we've improved the design as a side-effect.

To create new repositories, we can just implement the necessary functions. Let's spell them out with Elixir typespecs:

@spec insert(record :: struct) :: {:ok, struct} | {:error, struct}
def insert(record) do
  # Code here
end

The @spec directive declares that the insert function:

  • Receives a struct called record as its sole argument, and
  • Returns either {:ok, struct} or {:error, struct}.

Something can still be improved though: it's not clear by looking at the code what exactly needs to be implemented by a new repository. I can hear a developer asking "Do I need to implement the Repo.all function?" And the answer is No — the Repo.all function is a test-specific necessity. Helpfully, Elixir provides a feature called "Behaviours".

Behaviours specify a set of functions that have to be implemented by a module, and also make sure these functions are implemented by each module using the behaviour. We can declare a behaviour with the @callback directive, similarly to @spec but without a function body:

# A behaviour needs to be encompassed by a module
defmodule Csv.Repo do
  @callback insert(record :: struct) :: {:ok, struct} | {:error, struct}
end

The next step is to rename our fake to Csv.Repo.InMemory and declare that it uses our brand new behaviour:

defmodule Csv.Repo.InMemory do
  @behaviour Csv.Repo

  def insert(record) do
    # Code here
  end
end

The benefits are not over yet — Elixir emits a compile-time warning if a module adopting a behaviour does not implement the required functions. Take the following module:

defmodule Csv.Repo.Http do
  @behaviour Csv.Repo

  def insert(), do: nil
end

As you can see, the first argument to insert is missing. Therefore Elixir emits a warning:

warning: function insert/1 required by behaviour Csv.Repo is not implemented (in module Csv.Repo.Http)

To finalize this section, let's reflect on what we've done so far. Instead of crafting a behaviour from the get-go, we worked towards our goal incrementally. First, we made our code work with a hardcoded fake, which allowed us to figure out the interface and the means of interacting with it. We then applied dependency injection and finally formalized the interface with a behaviour.

Compile-time Injection

Dependency injection is a very desirable feature for our library, because it allows clients to specify the repository right at the call site. We can either let clients implement a database repository or ship an implementation built on top of a library such as Ecto.

But what if our importer were not a library, and we did not need such flexibility? It would probably make sense to limit our repository options to "in memory" for tests and "database" for other environments. In that case, resorting to compile-time injection seems reasonable: it spares us from passing the dependency around every time, thus simplifying the code.

First of all, we would need to configure a default repository module, which would be Csv.Repo.Database. Supposing our library is being used by an Elixir application, we could edit its main configuration file:

# config/config.exs
config :my_application, :csv_importer_repo, Csv.Repo.Database

After which we would override that setting for the test environment with Csv.Repo.InMemory:

# config/test.exs
config :my_application, :csv_importer_repo, Csv.Repo.InMemory

Now we can have a module attribute inside Csv.Import with a value equal to our new config option. Module attributes are set at compile-time:

defmodule Csv.Import do
  @repo Application.get_env(:my_application, :csv_importer_repo)

  # Rest of the code here...
end
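As a side note, on recent Elixir versions (1.10 and later) Application.compile_env/3 is the recommended way to read configuration at compile time; it records the value so the runtime can warn if the configuration later diverges from what was compiled in:

```elixir
defmodule Csv.Import do
  require Application

  # compile_env/3 (Elixir 1.10+) reads config at compile time and lets the
  # runtime detect a mismatch between compiled and current configuration.
  # The third argument is a default used when no config is set.
  @repo Application.compile_env(:my_application, :csv_importer_repo, Csv.Repo.InMemory)

  def repo, do: @repo
end
```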

Finally, we can delete the repo option that was being passed via dependency injection, and instead use the @repo module attribute:

# There is no longer a "repo" option
def call(input_path, schema: schema, headers: headers) do
  {:ok, device} = File.open(input_path)
  {:ok, stream} = RecordStream.new(device, headers: headers, schema: schema)

  # Instead of the old "repo", we can use the "@repo" module attribute
  stream
  |> Task.async_stream(@repo, :insert, [], max_concurrency: 10)
  |> Enum.to_list
end

After removing dependency injection from the Importer tests, everything should work just as before!

Stubs

Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test.

As opposed to providing concrete implementations, stubs are supposed to return canned answers. They can be crafted with plain old modules and functions and should be backed by behaviours whenever possible.

Since we've already covered the use of behaviours, we will go through a case where a stub can be particularly handy: for faking out a module that has an extensive and stable interface, but only a subset of that interface is needed.

Stubs in Practice

We want our Importer to have a new feature:

Write a file to disk with any rows that have failed validation and thus could not be saved, as well as the validation errors themselves.

Such a feature would allow users to download the CSV, fix the errors, and attempt a new import with just the failed records.

This task can be accomplished by a specialized module that we'll call Csv.Output. It will work by dumping any invalid records into a CSV file. We need to design this module so it can be plugged into our Importer's stream pipeline:

stream
|> Task.async_stream(Save, :call, [repo], max_concurrency: 10)
|> Stream.map(fn {:ok, save_result} -> save_result end) # Extracts result from the task
|> Stream.map(&Csv.Output.write(&1)) # Just like this line
|> Enum.to_list

In the above snippet, Csv.Output.write deals with the import results coming through the stream via Save.call, one record at a time, and dumps a row into the output file if validation has failed.

In other words, if save_result comes out as {:error, changeset}, our function will log the corresponding row to the CSV file; if it is {:ok, changeset}, our function will ignore it altogether.
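A sketch of those two clauses might look as follows. The shape of the changeset here is hypothetical (a map carrying the original row and its errors); the real shape depends on how Save.call reports failures:

```elixir
# Hypothetical sketch: an error result dumps a CSV row to the device,
# a success result passes through untouched.
defmodule OutputSketch do
  def write({:error, %{row: row, errors: errors}} = result, device) do
    IO.binwrite(device, Enum.join(row ++ [errors], ",") <> "\n")
    result
  end

  def write({:ok, _changeset} = result, _device), do: result
end

{:ok, device} = StringIO.open("")
OutputSketch.write({:error, %{row: ["Elixir Language", ""], errors: "url can't be blank"}}, device)
OutputSketch.write({:ok, %{}}, device)
StringIO.contents(device)
# => {"", "Elixir Language,,url can't be blank\n"}
```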

A feasible implementation would make use of the File and IO modules:

iex(1)> {:ok, handle} = File.open "records.csv", [:write]
{:ok, #PID<0.86.0>}
iex(2)> IO.binwrite handle, "name,url,errors\n"
:ok
iex(3)> IO.binwrite handle, "Elixir Language,,url can't be blank"
:ok
iex(4)> IO.binwrite handle, "Semaphore Blog,,url can't be blank"
:ok
iex(5)> File.close handle
:ok

Let's go over what we just did: the first line opens up records.csv for writing (hence the [:write] argument) and obtains a "handle" in return. Next, the handle is used to write the headers and a few rows out to the file, after which the file is finally closed.

Given the code we've just seen, we can perfectly well make Csv.Output work on top of File.open. Some additional work needs to happen though: dumping the CSV headers into the file. After that, we'll be ready to start inserting the actual rows.

Testing with Stubs

To avoid interacting with the filesystem, we'll use a test double - technically a "stub" - with a slice of the File module's interface. It will contain two functions: open and close. For now, let's start with the open function:

defmodule FakeFile do
  def open(device, [:write]), do: {:ok, device}
end

Let's take a close look at how we can use it:

iex(1)> {:ok, device} = StringIO.open("")
{:ok, #PID<0.105.0>}
iex(2)> {:ok, device} = FakeFile.open(device, [:write])
{:ok, #PID<0.105.0>}
iex(3)> IO.binwrite device, "name,url,errors\n"
:ok
iex(4)> {contents, _} = StringIO.contents device
{"", "name,url,errors\n"}
iex(5)> contents
"name,url,errors\n"

Here's how it works: instead of passing a file path to the open function, we are passing an in-memory IO device called StringIO, which works the same as a normal file handle obtained from File.open. Lastly, we are using the IO handle to write the headers in and then get the results with StringIO.contents.

We are taking advantage of the fact that Elixir is a dynamic language: in the actual code we'll pass a file path; but in the tests, we'll pass a StringIO device.

That works because our stub passes the input argument forward without executing any logic, while still having a function signature and return value both compatible with File.open. Our stub also plays a "mocking" role: we are pattern-matching against the [:write] argument, which is what we need File.open to be called with.

Back to our main subject, let's write a test for Csv.Output.open with the help of our stub friend. It will ensure the CSV headers get dumped to the file at output_path:

defmodule Csv.OutputTest do
  use ExUnit.Case

  defmodule FakeFile do
    def open(device, [:write]), do: {:ok, device}
  end

  test "creates an output csv with the given headers" do
    {:ok, device} = StringIO.open("")
    headers = ~w(name url)

    Csv.Output.open(device, headers, FakeFile)

    {_, file_contents} = StringIO.contents(device)
    assert file_contents === "name,url,errors\n"
  end
end

This test builds on the stubbing behavior discussed above; in addition to the StringIO device, it hands FakeFile to our function via dependency injection. Although we are passing "name" and "url" as the input headers, we expect a third implicit header called "errors" to be present in the final output.

Finally, to make this test pass, our function is supposed to call FakeFile.open(device, [:write]) to retrieve the same input device back -- which it will use to dump the headers in with IO.binwrite:

defmodule Csv.Output do
  def open(path, headers, file_mod \\ File) do
    {:ok, device} = file_mod.open(path, [:write])
    headers = (headers ++ ["errors"]) |> Enum.join(",")

    IO.binwrite(device, headers <> "\n")
  end
end

We've successfully made use of our stub via dependency injection! Notice that file_mod has a default value equal to File, which is the module we've managed to fake out. Hence, File will be used when no file_mod is specified - most likely in a development or production environment.

To make the above test even more readable, we can use a few helpers to clarify the trick that's being executed:

defmodule FakeFile do
  def open(device, [:write]), do: {:ok, device}
end

def new_fake_path do
  {:ok, fake_path} = StringIO.open("")
  fake_path
end

def fake_file_contents(fake_path) do
  {_, file_contents} = StringIO.contents(fake_path)
  file_contents
end

test "creates an output csv with the given headers" do
  fake_path = new_fake_path()
  headers = ~w(name url)

  Csv.Output.open(fake_path, headers, FakeFile)

  assert fake_file_contents(fake_path) == "name,url,errors\n"
end

Testing Edge Cases

This is where stubs can help you win the day. So far, we've just tested the happy path. But what happens when the file can't be opened? To answer that question, let's check out the documentation for File.open. Here's the relevant part:

Returns {:ok, function_result} in case of success, {:error, reason} otherwise.

Given this information, what we need to do is clear — write a brand new stub to return {:error, reason} and unlock our ability to test an edge case that would otherwise be very difficult to reproduce:

defmodule BadFakeFile do
  def open("output.csv", [:write]), do: {:error, "failed to write csv"}
end

As opposed to our first stub, wherein we used the "fake path" trick, here we're pattern-matching against a literal "output.csv" string and skipping the StringIO maneuvering, because we won't need to collect any output. Let's use the stub in a test:

test "returns an error when the file can't be opened" do
  output = Csv.Output.open "output.csv", ~w(name url), BadFakeFile

  assert {:error, "failed to write csv"} = output
end

To make this test green, we'll use a case statement: if file_mod.open returns {:ok, handle}, we'll do exactly what we did before, and additionally return {:ok, handle}; otherwise, we'll return {:error, reason}:

defmodule Csv.Output do
  def open(path, headers, file_mod \\ File) do
    case file_mod.open(path, [:write]) do
      {:ok, handle} ->
        # Parentheses matter here: |> binds tighter than ++
        headers = (headers ++ ["errors"]) |> Enum.join(",")
        IO.binwrite(handle, headers <> "\n")
        {:ok, handle}

      {:error, reason} ->
        {:error, reason}
    end
  end
end

Imagine how difficult and error-prone it would be to set up a scenario exercising these edge cases against the real filesystem!

Mocks and Spies

Mocks are objects pre-programmed with expectations which form a specification of the calls they are expected to receive.

In other words, a mock is an expectation set by a test framework over a module before running the code under test. Let's see an example in Ruby:

it 'creates a client over the wire' do
  expect(HTTPClient).to receive(:post)
    .with(endpoint_url, body: { name: 'Thiago' })
    .once

  Product.create(client_name: 'Thiago')
end

In this code snippet, we expect Product.create to call HTTPClient.post behind the scenes to issue a POST request.

Back in Elixir, libraries such as mock are able to achieve the same effect. However, the underlying approach is fraught with problems. The mock library, for instance, works by recompiling and reloading "fake" modules to replace the real ones, which introduces undesirable runtime side-effects. For that reason, we can't run tests in parallel: while one test sets a mock over a fake module, another test running in parallel may expect to see the real module, but it won't.

And worst of all: when mocking modules, this approach couples our application to specific dependencies and severely limits what it can do. In the Ruby example above, we can't swap the HTTP client for a different one, even if it sports the same interface and behavior.

Mocking Without Libraries: Spies

Mocks are particularly handy to test actions involving side-effects and no return values, such as telling the database to insert a record for us. The "mocking" way to test such a piece of code is to assert whether the database function has been called with the expected arguments.

Libraries such as "mock", however, resort to code reloading and stateful processes to keep track of whether a module has been called. How do we bypass these tricks then?

If we focus on the most basic ability of a mock, asserting whether something has been called, we can achieve our goal using message passing along with stubs, via dependency injection or compile-time injection. And that leads us to spies:

Spies are stubs that also record some information based on how they were called. One form of this might be an email service that records how many messages it was sent.

In Elixir, the easiest way to achieve a mocking effect is with a spy: a stub that we can pass around and that sends a message to self() when called. The test process can then check its mailbox to make sure the function has in fact been called.

A more traditional mock arrangement would require separate processes to keep track of function calls.
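For illustration, such an arrangement could look like the hypothetical Agent-backed recorder below, which is essentially the bookkeeping that mocking libraries maintain for you:

```elixir
# Hypothetical process-backed spy: every call is recorded in an Agent,
# so it can be inspected after the fact - at the cost of an extra process.
defmodule CallRecorder do
  def start_link, do: Agent.start_link(fn -> [] end, name: __MODULE__)
  def record(call), do: Agent.update(__MODULE__, &(&1 ++ [call]))
  def calls, do: Agent.get(__MODULE__, & &1)
end

CallRecorder.start_link()
CallRecorder.record({:close, :device})
CallRecorder.calls()
# => [{:close, :device}]
```

Sending a message to self() sidesteps this extra process entirely.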

Spies in Practice

Let's return to the Output module we started in the last section.

It's not advisable to open a file and leave its underlying process eating up system resources. For that reason, we will implement an Output.close counterpart to Output.open. While the former function is an abstraction over File.open, the latter will be a layer on top of File.close.

But how can we test that Output.close manages to call File.close? One of the ways is to have a stub send a message to the current process, self().

Our FakeFile.close function needs to sport the same interface as File.close, but our stub implementation will send a message to the current process, carrying along any arguments it receives:

defmodule FakeFile do
  def open(device, [:write]), do: {:ok, device}

  # Send the call to the test process, then return :ok just like File.close
  def close(device) do
    send(self(), {:close, device})
    :ok
  end
end

Our test process can then verify whether the function has been called with the expected arguments in its mailbox:

test "creates an output csv with the given headers" do
  fake_path = new_fake_path()
  headers = ~w(name url)

  {:ok, handle} = Csv.Output.open(fake_path, headers, FakeFile)
  Csv.Output.close(handle, FakeFile)

  assert fake_file_contents(fake_path) == "name,url,errors\n"
  assert_received {:close, ^handle}
end

First things first, we've changed the Output.open call to grab a handle to the opened file. That handle is then passed to the close function because it needs to know what to close. Finally, we assert that the {:close, ^handle} message is sent to the current process by FakeFile.close. FakeFile is also passed to Output.close along with the handle. Notice the pin operator (^handle) to make pattern-matching against the handle effective.

Now here's the implementation of Output.close to make the above test green:

defmodule Csv.Output do
  def open(path, headers, file_mod \\ File) do
    {:ok, handle} = file_mod.open(path, [:write])
    headers = (headers ++ ["errors"]) |> Enum.join(",")
    IO.binwrite(handle, headers <> "\n")
    {:ok, handle}
  end

  def close(handle, file_mod \\ File) do
    # Calls the close function with the handle
    file_mod.close(handle)
  end
end

It's possible to refactor this code into a single open function that executes a callback and then closes the file handle. However, that is left as an exercise for the reader.
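If you'd like to compare notes afterwards, one possible shape for that exercise is sketched below. The with_open name and callback style are assumptions, not the tutorial's actual code:

```elixir
# Sketch of the exercise (hypothetical shape): open the file, hand the
# device to a callback, and guarantee the close happens even if it raises.
defmodule OutputWithCallback do
  def with_open(path, headers, fun, file_mod \\ File) do
    case file_mod.open(path, [:write]) do
      {:ok, handle} ->
        IO.binwrite(handle, Enum.join(headers ++ ["errors"], ",") <> "\n")

        try do
          fun.(handle)
        after
          file_mod.close(handle)
        end

      {:error, reason} ->
        {:error, reason}
    end
  end
end
```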

Integration Tests

Test doubles are great for keeping your unit tests fast, focused, thorough, and reliable, but you should in no way skip integration and end-to-end tests. For every code path hitting external resources, I suggest having at least one integration test exercising the whole stack, to make sure your units are wired up correctly and your system works as it should. This is what the Testing Pyramid strategy is all about.
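A common way to wire this up with ExUnit is tagging: mark slow integration tests and exclude them from the default run. The module and test names below are illustrative:

```elixir
# Normally ExUnit.start lives in test/test_helper.exs;
# exclude: [:integration] skips tagged tests by default.
ExUnit.start(exclude: [:integration], autorun: false)

defmodule RepoIntegrationTest do
  use ExUnit.Case

  # Excluded by default; run explicitly with: mix test --only integration
  @tag :integration
  test "inserts a record into the real database" do
    assert true
  end
end
```

In a Mix project, `mix test --only integration` then runs just the tagged tests against the real resources.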

Conclusion

I hope this post has clarified your view on test doubles. We can combine all these techniques to achieve very powerful tricks. In functional languages such as Elixir, it's best practice to use test doubles only when strictly necessary, which is the case for I/O bound code that needs to reach out to external resources.

There's a great library called mox that allows setting mock expectations on an ad-hoc basis, but it requires developers to define behaviours for their test doubles. While this is a great practice in most cases, it can be more convenient to skip the formalization - especially if you find yourself defining behaviours for built-in modules - and stick with ad-hoc stubs such as the ones we've presented here. In that case, the interface your stub fakes out is likely to remain stable anyway.

The code developed here can be found in this GitHub branch. It branches off the previous tutorial, where you can also find instructions for running these tests on Semaphore.

If you have any questions or comments, feel free to leave them in the comment section below.

Thiago Araújo Silva

A full-stack oriented Rubyist, Elixirist, and Javascripter who enjoys exchanging knowledge and aims for well-balanced and easy-to-maintain solutions regarding product needs. Frequently blogs at The Miners.
