Faster Docker Builds: Practical Guide to BuildKit cache, buildx, and image optimization
Slow Docker builds in CI waste developer time and CI minutes. BuildKit (via docker buildx) gives you powerful ways to persist and share build cache across runs and machines — if you use them properly. This guide explains the most useful BuildKit cache features, shows practical Dockerfile and CI examples, and lists the small changes that usually yield the biggest speed wins.
Why this matters
- CI runners are usually ephemeral: no local cache, so every build starts “cold.”
- Network-bound package installs (npm, pip, apt, go modules) are the common bottleneck.
- Proper caching can turn multi-minute rebuilds into seconds by reusing previously downloaded artifacts and intermediate layers.
Overview of BuildKit caching
- BuildKit keeps a local build cache by default on the machine running the build.
- To reuse cache across CI jobs or machines you must export it to an external backend (registry image, inline in the image, GitHub Actions cache, local dir, etc.) and import it on later builds. (docs.docker.com)
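For example, a minimal sketch using the local directory backend (the /tmp/buildx-cache path is arbitrary; any directory that survives between runs, such as a mounted volume or a CI cache directory, works):

```bash
# Export cache to a local directory, and read it back in on the next run
docker buildx build \
  --cache-to type=local,dest=/tmp/buildx-cache,mode=max \
  --cache-from type=local,src=/tmp/buildx-cache \
  -t my-app:dev .
```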
Key cache backends (quick)
- inline: stores cache metadata inside the image itself. Easy to start with; good when you push the same image you build. (docs.docker.com)
- registry: stores cache in a separate image on your registry. Supports caching intermediate stages (mode=max) and separation of cache vs final artifact. Use this for complex builds or when you want max cache coverage (see the flag comparison after this list). (docs.docker.com)
- gha (GitHub Actions cache): an exporter that stores cache in GitHub Actions cache — convenient on GitHub-hosted runners. Experimental and subject to GitHub limits; some setup needed. (docs.docker.com)
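As a quick side-by-side (the registry name is a placeholder), inline and registry caching differ only in the cache flags. Note that the inline exporter only supports mode=min:

```bash
# inline: cache metadata travels inside the pushed image itself
docker buildx build --push -t registry.example.com/app:latest \
  --cache-to type=inline \
  --cache-from type=registry,ref=registry.example.com/app:latest .

# registry: cache lives in a separate image; mode=max covers intermediate stages too
docker buildx build --push -t registry.example.com/app:latest \
  --cache-to type=registry,ref=registry.example.com/app:cache,mode=max \
  --cache-from type=registry,ref=registry.example.com/app:cache .
```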
Practical CI recipe (buildx + registry cache)
- Build and push an image and export cache to the registry at the end of the job:

```bash
docker buildx build --push \
  -t my-registry.example.com/my-app:latest \
  --cache-to type=registry,ref=my-registry.example.com/my-app:cache,mode=max \
  .
```

- In subsequent CI runs, import that cache before building:

```bash
docker buildx build --push \
  -t my-registry.example.com/my-app:latest \
  --cache-from type=registry,ref=my-registry.example.com/my-app:cache \
  --cache-to type=registry,ref=my-registry.example.com/my-app:cache,mode=max \
  .
```

This pattern gives CI builds access to previously produced layers and intermediate-stage outputs, saving network time and rebuild work. The registry cache supports parameters like mode (min vs max), compression (gzip/zstd), and OCI media-type options for compatibility. (docs.docker.com)
GitHub Actions example (using docker/build-push-action)
- With GitHub Actions you can use the gha cache backend or inline/registry cache with buildx. Example using the build-push action and the gha cache:

```yaml
- uses: docker/setup-buildx-action@v3
- name: Login to registry
  uses: docker/login-action@v3
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
  uses: docker/build-push-action@v6
  with:
    push: true
    tags: ghcr.io/my-org/my-app:latest
    cache-from: type=gha
    cache-to: type=gha,mode=max
```
Note: the gha backend is experimental and has GitHub cache limits; when available it’s a convenient option on GitHub-hosted runners. (docs.docker.com)
Use cache mounts in your Dockerfile (the single biggest win)
- BuildKit’s RUN --mount=type=cache keeps package manager caches (npm, pip, go build caches, apt downloads) between builds even when the layer itself is rebuilt. This avoids re-downloading dependencies and dramatically speeds rebuilds.
- Examples:

npm:

```dockerfile
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm install
COPY . .
RUN npm run build
```

pip:

```dockerfile
FROM python:3.11 AS build
WORKDIR /app
COPY requirements.txt ./
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install --upgrade pip && pip install -r requirements.txt
```

apt (cache lists and downloads):

```dockerfile
# Debian-based images discard downloaded packages by default (docker-clean);
# remove that config so the cache mount is actually populated
RUN rm -f /etc/apt/apt.conf.d/docker-clean
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    apt-get update && apt-get install -y build-essential
```

Cache mounts persist in the builder's local storage between builds and are far more robust than trying to copy ~/.npm or the pip cache around manually. Note that their contents are not included in --cache-to exports, so on ephemeral runners they help most when the builder itself persists between jobs. (docs.docker.com)
Multistage builds and layer ordering
- Multistage builds reduce final image size but can also improve cache hits if you structure them well:
- Put rarely changing steps earlier (e.g., OS packages, language runtime), and frequently changing file copies (your source code) later.
- Separate dependency installation into its own stage so changes to source don’t invalidate the dependency layers (see the sketch after this list).
- Use .dockerignore to avoid copying large irrelevant files (node_modules, .git, docs) into the build context — this avoids unnecessary cache invalidation.
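As a minimal sketch, assuming a hypothetical Node app whose `npm run build` emits a `dist/` directory: dependency installation lives in its own stage, so editing source only invalidates the later copy-and-build steps:

```dockerfile
# Stage 1: dependencies only; rebuilt only when package*.json changes
FROM node:18 AS deps
WORKDIR /app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm npm ci

# Stage 2: build; source changes invalidate from here down, not the deps stage
FROM node:18 AS build
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

# Stage 3: small runtime image carrying only the built artifacts
FROM node:18-slim AS runtime
WORKDIR /app
COPY --from=build /app/dist ./dist
CMD ["node", "dist/index.js"]
```

A matching .dockerignore (node_modules, .git, docs) keeps those paths out of the build context, so the `COPY . .` layer isn’t invalidated by irrelevant file changes.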
Cache modes: min vs max (tradeoffs)
- mode=min (default for some exporters): only layers that are part of the final image are exported. Smaller cache, faster export/import, but may miss intermediate artifacts.
- mode=max: export all intermediate layers (better cache hit rate for complex builds) but produces bigger caches and longer export/import times. Test both for your build to find the sweet spot. (docs.docker.com)
Practical tips and pitfalls
- Secrets: never COPY credentials into layers. Use BuildKit’s --secret mechanism (docker buildx build --secret) to pass secrets at build time; cache exports can otherwise end up containing leaked data. The docs warn about manual secret handling causing leaks. (docs.docker.com)
- Cache key lifecycle: caches are overwritten when you write to the same cache ref. If you want isolated caches (per-branch or per-PR), include branch or commit identifiers in the cache ref name, as in the sketch after this list.
- Registries and ECR: some registries don’t support OCI image indexes — for those you may need to set image-manifest=true when exporting cache to the registry. Compression options like zstd can speed cache transfer. (docs.docker.com)
- GitHub cache API limits: the gha backend may be throttled or have eviction policies. If you see timeouts you may need to provide a token or switch to registry-based caches. (docs.docker.com)
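A sketch tying the last few tips together, assuming a hypothetical CI_BRANCH variable and a placeholder registry; the second --cache-from falls back to the main branch's cache when a branch cache doesn't exist yet:

```bash
#!/usr/bin/env bash
set -euo pipefail

BRANCH="${CI_BRANCH:-main}"             # hypothetical CI-provided branch name
IMAGE="my-registry.example.com/my-app"  # placeholder registry/image

docker buildx build --push \
  -t "$IMAGE:latest" \
  --secret id=npmrc,src="$HOME/.npmrc" \
  --cache-from "type=registry,ref=$IMAGE:cache-$BRANCH" \
  --cache-from "type=registry,ref=$IMAGE:cache-main" \
  --cache-to "type=registry,ref=$IMAGE:cache-$BRANCH,mode=max" \
  .
```

The Dockerfile consumes the secret with `RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci`, so the credential never lands in a layer or in the exported cache.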
Checklist to get started (copy-paste)
- Enable buildx in your CI (docker/setup-buildx-action or install buildx).
- Add cache mounts in Dockerfile for package managers where applicable.
- In your CI workflow:
- Import remote cache via --cache-from (registry or gha).
- Build with buildx and export cache via --cache-to.
- Test mode=min vs mode=max; measure build times and network transfer.
- Protect secrets by using --secret and never embedding credentials in files or ARGs.
Minimal example pipeline (shell)
```bash
# First build (cold): push image and cache
docker buildx build --push -t registry/my-app:latest \
  --cache-to type=registry,ref=registry/my-app:cache,mode=max .

# Next build (warm): import cache to speed up
docker buildx build --push -t registry/my-app:latest \
  --cache-from type=registry,ref=registry/my-app:cache \
  --cache-to type=registry,ref=registry/my-app:cache,mode=max .
```
Real-world impact
- In many setups, switching to cache mounts and exporting cache to a registry reduces dependency-install time from tens of seconds per run to single-digit seconds, and prevents repeated downloads across CI jobs. Try measuring a “cold” vs “warm” build time before/after — that gives a clear ROI for the effort. (See the earlier sections for the exact flags and Dockerfile patterns.) (docs.docker.com)
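A rough way to take that measurement, assuming the registry cache ref from the earlier recipe (the prune clears local BuildKit state so each timing starts cold locally):

```bash
# Cold: no local state, no imported cache
docker buildx prune --all --force
time docker buildx build -t my-app:bench .

# Warm: same build, importing the shared registry cache
docker buildx prune --all --force
time docker buildx build \
  --cache-from type=registry,ref=registry/my-app:cache \
  -t my-app:bench .
```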
Further reading and references
- Build cache backends and options (inline, registry, local, gha) — Docker docs. (docs.docker.com)
- Inline cache quick guide and syntax. (docs.docker.com)
- Cache mounts and how they speed package installs. (docs.docker.com)
- Using cache exporter with GitHub Actions and caveats. (docs.docker.com)
Closing notes
Start with one small change: add a --mount=type=cache for your package manager and enable cache export/import in CI. That modest change is where most teams see the biggest build-time drop with the least complexity. Once that’s working, experiment with registry vs inline vs gha cache exporters and the mode option to balance cache size and hit-rate for your workflow.