21 Dec 2022 · Software Engineering

    A Developer’s Guide to Terraform

    10 min read

Infrastructure has traditionally been configured through user interfaces or command line tools. This is quick and easy, but often results in infrastructure that is challenging to reproduce or change safely. Infrastructure as Code (IaC) solves these problems, though it introduces some complexity that can be intimidating at first. IaC tools like Terraform, however, are not as difficult to learn as some people think, and knowing how to use them can be a great boon to your career. In this article, we'll look at the benefits of Terraform, one of the most popular IaC tools available.

Terraform has become almost synonymous with the Infrastructure as Code (IaC) movement. Managing infrastructure through a user interface presents a number of challenges that provisioning through code solves. Creating and managing infrastructure with code brings substantial benefits in both risk reduction and productivity.

Terraform, from HashiCorp, is a platform-agnostic tool that defines infrastructure in human-readable code. While Terraform is not the only IaC tool, it has quickly become the most popular thanks to its extensibility and shallow learning curve.

    Why infrastructure as code is important

Creating cloud resources or setting up monitoring with a simple configuration file provides enormous benefits for engineering and infrastructure teams. Provisioning infrastructure with code offers many ways to reduce risk and increase stability, including:

    • Traceability
    • Repeatability
    • Automation
    • Documentation

Traceability is the most immediately evident benefit of using Terraform to manage infrastructure. As is best practice, Terraform code is usually tracked with Git. When someone wants to change the way infrastructure is configured, they open a pull request and get reviews from their peers. When an infrastructure change goes wrong, the offending change is as easy to find as any other piece of erroneous code. Having a clear changelog for all infrastructure reduces risk and builds confidence for any team.

Another obvious benefit of creating and managing infrastructure as code is repeatability. Having every monitor ever created for your microservices platform precisely defined in code allows you to easily recreate your monitoring setup should anything ever happen to it.

Yet another advantage of infrastructure as code is that the actual steps for creating infrastructure are automated away. Declaring infrastructure in close-to-plain English is less tedious and error-prone than creating it manually. Terraform code describes the end state of the infrastructure after a creation or change, not the instructions for making the change. The provider turns the declarative code into imperative instructions compatible with the interfacing API. This is both convenient and safe because it removes manual, error-prone processes from managing engineering resources.
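As a small illustration of this declarative style (the bucket name here is hypothetical), a resource block describes only the desired end state, and the provider works out the API calls needed to reach it:

```hcl
# Desired end state: a single S3 bucket with this name exists.
# The AWS provider translates this declaration into the
# imperative create/update API calls behind the scenes.
resource "aws_s3_bucket" "logs" {
  bucket = "example-logs-bucket"
}
```

Nothing in this block says how to create the bucket; re-running it against infrastructure that already matches the declaration results in no changes at all.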

Infrastructure as code also serves as documentation. Non-developers can often look at Terraform and get a good idea of what will be produced. Configurations can be abstracted to the point of being hard to interpret, but digging into the code reveals exactly what will be created when it is run. In fact, running a plan locally before it is applied shows exactly what will be created or destroyed when the plan is applied in production, and you can apply the plan in a staging environment to observe the created infrastructure directly. Because this documentation is created at the same time as the infrastructure itself, teams can collaborate more easily, even performing code reviews on pull requests to ensure best practices are followed.

    How Terraform code becomes infrastructure

Terraform does not create infrastructure directly from code. HashiCorp calls the group of files that describe the infrastructure a “configuration”. Terraform configurations rely on a provider, which interfaces with the target system to create and manage infrastructure directly. From a configuration, Terraform creates an execution plan that explicitly describes what changes will be made. After reviewing the plan, developers and engineers can apply it to perform the operations it specifies.

The Terraform community (including HashiCorp, its creator) has published almost 2,000 providers that can instantly be used to interpret and act on Terraform code. Some of the providers that are readily accessible today are:

    • Amazon Web Services
    • Azure
    • Splunk
    • Datadog
    • Google Cloud
    • Sentry
    • GitHub
    • (Many more!)

If you need Terraform to interact with something that doesn’t already have a provider, you can write one! Providers are written in Go using a plugin library provided by HashiCorp. The provider interfaces with a client library for the system in question, making HTTP calls to that system’s API.

    Creating an AWS S3 Bucket with Terraform

Creating cloud resources is one of many tasks Terraform can abstract away for teams. It’s relatively straightforward to define details for cloud resources, and we’ll use AWS S3 (Simple Storage Service) as an example. S3 is cloud storage into which you can upload and download files of nearly any kind, both manually and via the API. Both Azure and Google Cloud have similar products, but for this example S3 will suffice.

    Setting up AWS Credentials

If you don’t already have an account with AWS, you can quickly create one for free. Once you’ve done that, or if you have an account already, you’ll need to create an IAM (Identity and Access Management) user for Terraform to connect as. In the AWS web console, search for and select IAM. From the left-side menu, select “Users” and then click “Create a new user”.

Give it an identifiable name like terraform-provider, then opt for “Access Key – Programmatic Access” and move on to permissions. Here you select what you would like Terraform to be able to do on your behalf with this IAM user. For this tutorial, search for “S3” and select “AmazonS3FullAccess”. Finally, skip the tags and select “Create User”. Take note of your access key ID and secret access key, because this is the only chance you will have to view them.

    Setting up Terraform on Your Machine

If you haven’t already, you’ll need to install Terraform on your machine. This is easiest to do with Homebrew by running:

    brew tap hashicorp/tap

    Followed by:

    brew install hashicorp/tap/terraform

    Writing a Configuration

As mentioned earlier, a configuration is what Terraform calls the code that defines a piece of infrastructure. Each Terraform configuration must live in its own directory, so create a new one for this project somewhere on your machine.

In that directory, create a file called main.tf:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }

  required_version = ">= 1.2.0"
}

variable "access_key" {
  description = "AWS IAM access key"
  type        = string
  sensitive   = true
}

variable "secret_key" {
  description = "AWS IAM secret key"
  type        = string
  sensitive   = true
}

provider "aws" {
  region     = "us-east-1"
  access_key = var.access_key
  secret_key = var.secret_key
}

resource "aws_s3_bucket" "create-my-bucket" {
  bucket = "some-bucket-name-that-must-be-globally-unique"
}

The first block, under the terraform key, is the configuration/settings section, which pins the provider and Terraform versions. The two variable blocks declare the credentials that will be supplied when the plan is applied. The provider block indicates which provider you want to translate your Terraform code into infrastructure, as well as settings for that provider; in this case, we specify the region in which AWS will create our resource. The last block tells the provider that we want an S3 bucket with a given name and default settings.
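Other resource blocks can build on this one by referencing its attributes. As a sketch (assuming version 4+ of the AWS provider, where bucket versioning is managed as its own resource type), enabling versioning on the bucket would look something like:

```hcl
# Enables versioning on the bucket defined above.
# Referencing aws_s3_bucket.create-my-bucket.id also tells
# Terraform to create the bucket before this resource.
resource "aws_s3_bucket_versioning" "create-my-bucket" {
  bucket = aws_s3_bucket.create-my-bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}
```

References like this are how Terraform builds its dependency graph and decides the order in which resources are created.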

    Next, add a new file called testing.tfvars in the same directory, which will house your AWS credentials.

    access_key = "<aws_iam_access_key>"
    secret_key = "<aws_iam_secret_key>"

    Applying the Plan

    Next, initialize the directory by running:

    terraform init

    It’s a best practice to format your Terraform configuration, and you can quickly do this by running:

    terraform fmt

The given configuration is already formatted, so you should see no change. You should also validate that your configuration is syntactically valid and internally consistent, which you can do by running:

    terraform validate

    Again, the given configuration is already valid, so you should just see a success message! All that is left to actually create an S3 bucket is to apply the plan by running:

    terraform apply -var-file=testing.tfvars

As long as this runs without errors, you can visit the S3 section of the AWS console and view your newly created bucket!
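If you’d also like Terraform to report details about the bucket, you can optionally add an output block to main.tf (a small sketch; the arn attribute comes from the AWS provider’s aws_s3_bucket resource):

```hcl
# Printed at the end of `terraform apply`, and retrievable
# afterwards with `terraform output bucket_arn`.
output "bucket_arn" {
  description = "ARN of the newly created bucket"
  value       = aws_s3_bucket.create-my-bucket.arn
}
```

Outputs are especially handy when other tools or configurations need identifiers for the resources Terraform created.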

    Terraform in CI with Semaphore

Terraform can be run as part of a CI/CD pipeline on Semaphore! To do this, you’ll first need to create a Git repository in the directory containing your Terraform configuration:

    git init

    Next, create a .gitignore file and add the following to it:

# Local .terraform directories
**/.terraform/*

# .tfstate files
*.tfstate
*.tfstate.*

# Crash log files
crash.log
crash.*.log

# Exclude all .tfvars files, which are likely to contain sensitive data, such as
# passwords, private keys, and other secrets. These should not be part of version
# control as they are data points which are potentially sensitive and subject
# to change depending on the environment.
*.tfvars
*.tfvars.json

# Ignore override files as they are usually used to override resources locally and so
# are not checked in
override.tf
override.tf.json
*_override.tf
*_override.tf.json

# Include override files you do wish to add to version control using a negated pattern
# !example_override.tf

# Ignore the plan output of command: terraform plan -out=tfplan
# example: *tfplan*

# Ignore CLI configuration files
.terraformrc
terraform.rc

    Stage and commit this file before committing anything else.

Next, stage your code, commit it, and push it to a new private repository on GitHub.

Finally, sign up for Semaphore – it’s easiest if you sign up with GitHub and save yourself some GitHub configuration later. You can read more about connecting Terraform with GitHub in this article by Michał Majewski from Pretius. The hobby plan will work for this example.

If you have extracted your AWS secret key into an environment variable, now is a good time to set it in Semaphore at the organization level. Next, select “Create New” in Semaphore and connect your GitHub repository. When setting up a workflow, select the standard Ubuntu single-job environment, as it comes with Terraform preinstalled. Finally, configure your CI job to run terraform plan on pushes to master, along with anything else you might want!
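As a rough sketch of what such a pipeline definition might look like (a .semaphore/semaphore.yml file; the secret name here is an assumption, and the agent settings are just one common choice, so check Semaphore’s documentation for specifics):

```yaml
version: v1.0
name: Terraform plan
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu2004
blocks:
  - name: Plan
    task:
      # AWS credentials stored as a Semaphore secret
      # (the secret name "aws-credentials" is an assumption)
      secrets:
        - name: aws-credentials
      jobs:
        - name: terraform plan
          commands:
            - checkout
            - terraform init -input=false
            - terraform plan -input=false
```

Running only terraform plan in CI is a deliberately safe starting point: the pipeline surfaces what would change on every push without ever modifying infrastructure on its own.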

    Everyone wins when developers aren’t afraid of Terraform

Developers are often not directly responsible for creating and managing infrastructure, but everyone benefits when they’re comfortable doing so. Many developers are apprehensive about Terraform because of the mild complexity it introduces over UIs or command line tools, and the learning curve can feel a bit steep, especially when learning to use a new Terraform provider correctly. Site reliability engineers often advocate, or even mandate, that infrastructure and monitoring tools be configured with Terraform, so familiarity lets developers participate and collaborate more fully. The benefits of provisioning infrastructure with Terraform are clear and the learning curve is not nearly as steep as it seems, so don’t be afraid to try it out!


Written by:
    Jeff is a Software Engineer writing code, fixing bugs, and helping patients get the medications they need to live healthy lives.
    Reviewed by:
    I picked up most of my soft/hardware troubleshooting skills in the US Army. A decade of Java development drove me to operations, scaling infrastructure to cope with the thundering herd. Engineering coach and CTO of Teleclinic.