You need Docker images to build and ship Docker containers: an image is the base of every container you run. Keeping images slim and light therefore speeds up the build and deployment of containers, and optimizing Docker images should be part of your containerization workflow. Docker image size matters for reasons such as:
- Faster deployments: smaller images take less time to download, transfer, and load into the container runtime, improving team productivity and application performance.
- Better storage utilization on your machine.
- Reduced network bandwidth when transferring between hosts and container orchestration environments.
- Improved security: reducing image size and removing unnecessary files eliminates vulnerable components that expose images to security issues.
- Build and portability efficiency: smaller images speed up build processes and improve resource usage.
This post discusses best practices and strategies for slimming and reducing Docker image size.
Best strategies to slim Docker images
Below are strategies you can use to help create slim Docker images.
When building a Docker image, you write instructions in a Dockerfile. Assuming you are running a Node.js application, below is an initial example of a Dockerfile that packages and builds the image:
```dockerfile
# Node.js base image
FROM node:19

# Docker working directory
WORKDIR /app

# Copy dependency manifests
COPY package.json ./
COPY package-lock.json ./

# Install dependencies
RUN npm install

# Copy application files
COPY . .

# Expose the app port and start the application
EXPOSE 4000
CMD npm start
```

This is a pretty simple example: the Dockerfile commands are straightforward to follow, and it implements a single-stage build.
This Dockerfile is enough to get a simple hello-world Node.js application running on Docker. However, it creates a rather large image, even for the smallest codebase, and you should expect the size to increase as the application grows.
Things get more complex if you have multiple environments — you end up maintaining many Dockerfiles and many images. It can get expensive in the long run to maintain and rebuild development and production images.
Use multistage builds

Multistage builds let you slim down Docker images. You define multiple stages in a single Dockerfile using multiple FROM commands, each representing a different stage of the image build process. Each stage can use a different base image and execute different commands to build and package your application.
The final image of your application is created by copying code files and dependencies from the previous stages. This means Docker will discard any intermediate files and build artifacts that are no longer needed to create your final build.
```dockerfile
# Stage one: development image
FROM node:19-alpine AS base
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD npm start

# Stage two: production image
FROM node:19-alpine AS final
WORKDIR /app
# Copy the application (including node_modules) from the previous stage
COPY --from=base /app ./
# Drop the development-only dependencies
RUN npm prune --production
CMD npm start
```
This approach shines when you have an extensive application with development dependencies that you don't want to ship to a production environment. You can still build the development image on its own: the --target flag tells Docker to build only up to the named stage, here base. For example:
```shell
docker build --target base -t image_example .
```
This executes only the base stage, allowing you to isolate the different stages in your Dockerfile.
The benefits of multistage builds are:
- You end up with a slimmer image.
- You don’t have to create different Dockerfiles for development and production purposes.
- You only have to maintain one file: the multistage build combines the Dockerfile environments and creates a single artifact ready for production builds.
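As a sketch of that single-file workflow, you could build both images from the one Dockerfile (the tags myapp:dev and myapp:prod are placeholders, not names from this post):

```shell
# Build only the development stage ("base") from the shared Dockerfile
docker build --target base -t myapp:dev .

# Build the full multistage Dockerfile to produce the production image
docker build -t myapp:prod .
```

Both commands read the same Dockerfile; only the stopping point differs.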
Choosing base images
A Docker base image is the foundation on which your own Docker images are built. Base images are pre-built images containing the tools and libraries required to run your applications in containers, and Docker provides different variants optimized for specific use cases.
In the Node.js example above, the image builds on node:19 as the base image. Node.js provides various optimized image variants, each published under its own tag. The base image is specified as node:<version>, where the version is the Node.js version you want to use. This default tag instructs Docker to pull a base image that ships the full Node.js toolchain and packages, so you end up creating images with a large disk footprint.
However, you can opt for other variants as distribution base images, such as:
- node:<version>-alpine – This variant is built on Alpine Linux, a minimal distribution whose base image is only about 5 MB. Here is an example of an Alpine-based image:
```dockerfile
# Your Node.js base image
FROM node:19-alpine3.16

# Docker working directory
WORKDIR /app

# Copy dependency manifests
COPY package.json ./
COPY package-lock.json ./

# Install dependencies
RUN npm install

# Copy application files
COPY . .

# Expose the app port and start the application
EXPOSE 4000
CMD npm start
```
- node:<version>-slim – This variant contains only the common packages needed to run Node.js, providing a smaller base image than the default. The slim variant consequently reduces the Docker image size as follows:
```dockerfile
# Your Node.js base image
FROM node:19-slim

# Docker working directory
WORKDIR /app

# Copy dependency manifests
COPY package.json ./
COPY package-lock.json ./

# Install dependencies
RUN npm install

# Copy application files
COPY . .

# Expose the app port and start the application
EXPOSE 4000
CMD npm start
```
The choice of base image largely determines the size of your final images. Always check the tags and variants your base image provides to reduce base image size. Make sure, however, that the variant you pick includes the right dependencies for your application.
Docker image layers
A Docker image is divided into layers. Layers are created based on how you write your Dockerfile: every Dockerfile instruction creates an image layer of its own. Together, these layers make up the Docker image's file system.
Take this as an example:
```dockerfile
FROM node:16
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
RUN npm install
COPY . .
EXPOSE 4000
CMD npm start
```
Every instruction in this Dockerfile creates an image layer of its own, and the layers are then assembled into the final image: the FROM command creates the first layer, and the remaining instructions add layers in series.
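If you want to see these layers for yourself, the docker history command lists an image's layers together with the instruction that created each one and the size it adds (shown here for the node:16 image used above):

```shell
# Show each layer of the node:16 image with its creating instruction and size
docker history node:16
```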
Over time, as you add more instructions, the image accumulates layers and increases in size. You should always prune image layers to produce slim, optimized Docker images by combining related commands.
Here, there are two COPY commands that can be slimmed down as follows:
```dockerfile
FROM node:16
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 4000
CMD npm start
```
Adding a wildcard tells Docker to copy both package.json and package-lock.json under one layer.
Below is another example that shows you how to combine RUN commands. This image clones the code from GitHub; thus, it requires Git to be installed in the container.
```dockerfile
FROM node:19-alpine
RUN apk add --no-cache git
RUN mkdir -p /app
RUN chown node:node /app
WORKDIR /app
USER node
RUN git clone https://github.com/Rose-stack/node_app.git .
RUN npm install
EXPOSE 4000
CMD npm start
```
It can be effectively slimmed as follows:
```dockerfile
FROM node:19-alpine
RUN apk add --no-cache git && \
    mkdir -p /app && \
    chown node:node /app
WORKDIR /app
USER node
RUN git clone https://github.com/Rose-stack/node_app.git . && \
    npm install
EXPOSE 4000
CMD npm start
```
While these are simple examples, they illustrate how to reduce the number of image layers by combining commands that perform similar operations.
Combining this approach with a multistage build and an optimized base image will significantly reduce Docker image size. We can, however, slim images even further!
Use a .dockerignore file

Not every file and folder needs to be copied into the Docker image. A command such as COPY . . instructs Docker to copy all files and folders from your local directory into the image. For example, a Node.js project contains files such as npm-debug.log that Docker does not need (the build commands will recreate whatever is required), so copying them over only increases your image size. To avoid that, always create a .dockerignore file in your Dockerfile's root directory listing the paths Docker should skip. A .dockerignore file can also keep files with sensitive data, such as passwords, out of the image.
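As a sketch, a .dockerignore for the Node.js example might look like this (the entries are illustrative; adjust them to your project):

```
node_modules
npm-debug.log
.git
.env
Dockerfile
.dockerignore
```

Anything matching these patterns is excluded from the build context, so COPY . . never sees it.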
Leverage compression tools
Combining all the above practices will go a long way toward reducing Docker image size, but we can do even more by leveraging compression tools that slim Docker images. One such tool is Docker Slim.

To reduce the size of your image, Docker Slim provides a build command that minifies a Docker image. Let's see how this works using a typical Docker image. The following command pulls the Nginx image:
```shell
docker pull nginx:latest
```
Check its original size with the following command:
```shell
docker images nginx:latest
```
Use the following Docker Slim command to compress your image:
```shell
docker-slim build --sensor-ipc-mode proxy --sensor-ipc-endpoint 172.17.0.1 --http-probe=false nginx
```
To confirm the results, check your image size again:
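Docker Slim tags the minified image by appending .slim to the original name by default, so (assuming that default naming) you can compare the two with:

```shell
docker images nginx
docker images nginx.slim
```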
This shows that Docker Slim reduced the image from 135 MB to around 12.4 MB, trimming roughly 90% of its initial size.
Slimming and reducing the size of your Docker images improves the overall build and deployment of your containers. Using the practices and strategies discussed in this post, you can create slim, trim Docker images that provide faster build times and lower storage and transfer costs. Apply all the above strategies to every image to achieve the optimal size.