Large images are bad in two ways:
- they take up a lot of space on the disk. The Docker daemon enforces an upper bound on the total size of images on your disk - if you rebuild large images frequently, you will end up cleaning up your cache very often.
- they take a lot of time to download. This is bad, because:
- not everyone has an unlimited broadband connection! It sucks when you have to download a fat docker image over your cafe's shitty wifi or out of your mobile plan allowance
- there are cases when bandwidth can become costly. If you host your images in a private repo at your cloud provider, it's quite likely that they'll charge you for egress bytes.
So - how big is it?
```shell
docker image list | grep <image name>
```
will list all available tags for your image with their uncompressed sizes. This is how much the image actually takes on your hard drive.
```shell
docker manifest inspect <image> | jq ".layers | map(.size) | add / 1024 / 1024"
```

will output a single number: the total size in MB of the compressed layers. This is how much needs to be downloaded over the network in case of layer cache misses (worst case).
Generic advice is all about understanding how docker image layers work. In general, each layer contains only a set of differences from the previous layer. This applies not only to adding and updating files, but also to removing them - a file deleted in a later layer still occupies space in the earlier one.
In order to keep your layer sizes small, only persist changes which are relevant to the functioning of the final container. Do not persist: package manager caches, downloaded installers and temporary artifacts.
A common strategy for solving this problem is chaining the install and cleanup commands into a single layer definition.
Instead of doing this:

```dockerfile
FROM ubuntu:20.04                  # 66 MB
RUN apt update                     # 27 MB
RUN apt install -y curl libpq-dev  # 16 MB

# total size: 109 MB
```

do this:

```dockerfile
FROM ubuntu:20.04                  # 66 MB
# removing apt cache after successful installation of dependencies
RUN apt update \
    && apt install -y curl libpq-dev \
    && rm -rf /var/lib/apt/lists/*  # 16 MB

# total size: 82 MB
```
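On Debian-based images you can usually shave off a bit more by skipping apt's "recommended" packages. A minimal sketch of the same layer with that flag added (`--no-install-recommends` is a standard apt-get option; the package names are just the ones from the example above):

```dockerfile
FROM ubuntu:20.04
# --no-install-recommends skips non-essential "recommended" packages;
# cleaning the apt lists in the same RUN keeps the layer small
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl libpq-dev \
    && rm -rf /var/lib/apt/lists/*
```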
Another common approach to solving this problem is to use a multi-stage build. Generally, the idea here is to perform some potentially heavy operations (e.g. compiling a binary from sources) in a separate stage, and then to copy just the resulting artifacts into a new, "clean" stage. Example:
```dockerfile
# Stage 1: build the Rust binary.
FROM rust:1.40 as builder
WORKDIR /usr/src/myapp
COPY . .
RUN cargo install --path .

# Stage 2: install runtime dependencies and copy the static binary into a clean image
FROM debian:buster-slim
RUN apt-get update \
    && apt-get install -y extra-runtime-dependencies \
    && rm -rf /var/lib/apt/lists/*
COPY --from=builder /usr/local/cargo/bin/myapp /usr/local/bin/myapp
CMD ["myapp"]
```
Since every package manager has its own arguments and best practices, the easiest way to manage this complexity is to use a tool which can find quick wins in your Dockerfile. The open-source hadolint analyzes your Dockerfile for common issues.
If you've been developing with your Dockerfile for some time, there's a good chance you experimented with different libraries and dependencies, which often have the same purpose. The general advice here is: take a step back and analyze every dependency in your Dockerfile. Throw away each and every dependency which is not essential to the purpose of the image.
Another very effective approach to eliminating cruft is to examine the contents of the image layers. I cannot recommend the dive CLI enough - it lets you look at the changes applied by each layer and highlights the largest and repeatedly modified files in the image.
By examining the docker build trace you may also find that you're installing GUI-specific packages (e.g. GNOME icons, extra fonts) which shouldn't be required for your console-only application. A good practice is to analyze the reverse dependencies of these packages and look for the top-level packages which triggered the installation of the unexpected fonts. E.g. the openjdk-11-jre package bundles a suite of dependencies for developing UI apps. Some tips on debugging the issue:
- just try removing the suspicious system package. The package manager should resolve the unmet dependencies and list the potential top-level offender as scheduled for removal.
- use tools like `apt-rdepends -r <package>` to see a flattened list of packages which depend on the package in question.
Choosing the base image is one of the very first decisions people make when creating a new image. It's quite common that the choice is affected by e.g. some example image on github, or whatever is already in use among your colleagues or organization.
Changing the base image can have a significant impact on the final image, and sometimes it's definitely worth revisiting that choice. Here's a quick walkthrough of common points for consideration:
- choosing a `-slim` image variant. Some distros (e.g. debian) offer a stripped-down version, with features which are redundant to most images removed. For non-interactive console applications, you usually do not need documentation and man pages.
- `alpine` is a container-optimized linux distribution, which uses its own package manager (`apk`) and an alternative libc implementation, `musl`. The final images are usually very small, although there are some limitations:
  - in case your application relies on `glibc`, you will need to recompile it against `musl`
  - alpine's own package repository is sometimes lacking in comparison to `debian`'s, and the versions may not be updated as frequently
  - relatively small community support
- `distroless` images - the slimming-down approach taken to the extreme, with virtually all non-essential components of the operating system (including the shell!) stripped out of debian images. Conceptually, the application and all its dependencies should be copied into the distroless image from a previous build stage (see the example Dockerfile, more usages in kubernetes). Could be a good choice if your application is statically linked or has a well-defined set of dependencies (think: a limited set of shared libraries).
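As a sketch of the distroless approach, here's a hedged example for a statically linked Go binary; the project layout and the binary name `server` are assumptions, while `gcr.io/distroless/static-debian11` is one of the published distroless base images:

```dockerfile
# Stage 1: build a fully static binary (CGO disabled so it doesn't need glibc)
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /server .

# Stage 2: distroless/static contains no shell and no package manager --
# only a minimal filesystem skeleton (CA certificates, tzdata, a nonroot user)
FROM gcr.io/distroless/static-debian11
COPY --from=builder /server /server
ENTRYPOINT ["/server"]
```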
A common issue with rapidly evolving codebases is that the images are not specialized - a single image serves as a common execution environment for very distinct use cases. For the purpose of this section, let's call these "furball" images.
Best practice is to identify the key use cases of the particular components and build dedicated images for these use cases. This way you can encourage better separation of concerns in your codebase.
Assuming all the images retain all their dependencies, in the best case the summed size of all specialized images will be exactly the same as the size of the "furball" image. But the benefits are substantial:
- specialization may encourage deeper cleanup of the dependencies
- specialization may encourage bigger changes in the organization of the codebase, like factoring the code into components.
- running a specialized workflow will require fetching a way smaller image
- Since you usually don't work on all components at the same time, you'll experience significant improvement in quality of life - you'll be downloading just the dependencies used by your component!
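One low-friction way to get specialized images without duplicating Dockerfiles is a single multi-stage file with one stage per use case, selected at build time via `--target`. A sketch, assuming a hypothetical Python codebase split into `api` and `worker` components with separate requirements files:

```dockerfile
# shared runtime dependencies live in a common base stage
FROM python:3.11-slim AS base
WORKDIR /app
COPY requirements-common.txt .
RUN pip install --no-cache-dir -r requirements-common.txt

# API image: only the API code and its extra dependencies
FROM base AS api
COPY requirements-api.txt .
RUN pip install --no-cache-dir -r requirements-api.txt
COPY api/ api/
CMD ["python", "-m", "api"]

# worker image: ditto for the background worker
FROM base AS worker
COPY requirements-worker.txt .
RUN pip install --no-cache-dir -r requirements-worker.txt
COPY worker/ worker/
CMD ["python", "-m", "worker"]
```

Then `docker build --target api -t myapp-api .` produces the API image without the worker's dependencies (and vice versa), while the shared base stage is built once.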
There have been multiple attempts to minify docker images by analyzing the runtime usage of files within the container (docker-slim is a well-known example). How this is supposed to work:
- spin up the container with a sidecar container tracking every access to a file inside the filesystem
- Perform actions on the running container covering all of the use cases of the image.
- Let the tool export the image containing just the files from the filesystem which were accessed in previous step.
The results of this workflow can be very good, although your results may vary depending on the use cases. Some points for consideration:
- the image is no longer the direct result of your Dockerfile, so the build is harder to reproduce and reason about
- the slimming-down workflow may remove some files from the image which can affect the functionality of the image or its security => how much do you trust the tool that the image is secure and correct?
- the image may not be suitable for any use case other than the one exercised in the slimming-down process. This may be a showstopper for images used for experimentation.