Does docker_layer_caching support buildx?

My CircleCI configuration has a job defined with a machine executor using docker_layer_caching. This currently works as expected using docker build. However, when moving to a custom builder to target multiple platforms, I do not see subsequent runs using a cache. The documentation for this option does not say anything regarding using buildx.

The main change in the job is:

docker buildx create --use --name myname
docker buildx build …

It’s plausible to me that a custom builder may need to be configured in some way to leverage the cache that CircleCI is meant to add. Is there any advice on what that could look like?

Thanks

[Updated]

Docker Buildx works with DLC on CircleCI as long as a Docker context is not used.
Please see Brian’s reply below!

Hi @stackptr,

Apologies for the late follow-up here!

I can confirm that docker buildx indeed does not play nicely with Docker Layer Caching (DLC) on CircleCI builds;
my own builds also did not utilize DLC when using docker buildx build specifically.

We also see this mentioned in the CircleCI aws-ecr Orb here:

In particular, the aws-ecr Orb has moved to using docker buildx for building images.
As such, developers using the current aws-ecr Orb (at 8.1.2) are also seeing DLC not utilized in their builds.

Unfortunately, I do not have an answer / solution for this yet!
Please allow us to reach out and confirm with our Engineering team on this, and follow up here :bowing_man:

If you have any findings, please let us know here as well.

Thank you for your understanding in this matter.

As a follow-up, I would like to share solutions and some explanation around Docker Buildx and the Docker Layer Caching (DLC) feature on CircleCI.

Re: DLC on CircleCI

When you build a Docker image, its layers are saved within the Docker root directory.
The Docker root directory is typically /var/lib/docker for most setups.

However, builds on CircleCI (or any CI) are ephemeral.
The next build is likely on a different virtual machine (VM) and so we need a way to “pass” around the stored layers between builds.

Hence, layer caching on CircleCI works by storing and retrieving these layers in an external volume instead.
These volumes are attached to the remote machine running the Docker daemon.
The idea is that these volumes will likely include some, if not all, of the layers as they get passed around.

More information: https://circleci.com/docs/docker-layer-caching#how-dlc-works
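For reference, DLC is enabled per job in the CircleCI config. A minimal sketch (the job name, machine image tag, and image tag `myapp` are placeholders, not values from this thread):

```yaml
version: 2.1

jobs:
  build:
    machine:
      image: ubuntu-2204:current
      # Attach the DLC volume to this job's VM so /var/lib/docker
      # layers persist between otherwise-ephemeral builds.
      docker_layer_caching: true
    steps:
      - checkout
      - run: docker build -t myapp:latest .
```

The same `docker_layer_caching: true` flag is also available under `setup_remote_docker` when using a Docker executor.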

Re: How does this impact Docker Buildx

With Docker Buildx, developers can choose where the cached layers should be stored and retrieved.

There are many options, including a local cache (a folder) and a registry cache (i.e., saving the layers in a Docker repo on a registry like Docker Hub).

See the following options for the docker buildx build command:

Note that you do not need the DLC feature in order to pass around the cache storage between builds.
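As a rough sketch, the registry-cache variant looks like this (the cache repository `docker.io/myorg/myapp-cache` and image name are placeholders). With this strategy neither DLC nor CircleCI cache steps are needed, since the layers live on the registry:

```shell
# Export the build cache to a dedicated repository on the registry, and
# import it again on subsequent builds. mode=max also caches the layers
# of intermediate build stages, not just the final image.
docker buildx build \
  --cache-to=type=registry,ref=docker.io/myorg/myapp-cache,mode=max \
  --cache-from=type=registry,ref=docker.io/myorg/myapp-cache \
  --tag=docker.io/myorg/myapp:latest \
  --push \
  .
```

The trade-off is extra network transfer per build, in exchange for a cache that is shared across all machines that can reach the registry.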

Example: Cache from local folder

jobs:
  build-push-local-docker:
    docker:
    - image: cimg/base:stable
    resource_class: medium
    environment:
      DOCKER_REGISTRY: docker.io
      DOCKER_USER: <user_or_org>
      DOCKER_LOGIN: <login_user>
    steps:
    - checkout
    - setup_remote_docker:
        version: 20.10.14
    - run:
        name: Check Docker settings (for buildx)
        command: |
          docker version
          docker buildx version
          docker context inspect
    - run:
        name: Setup docker buildx
        command: |
          docker context create circleci
          docker buildx create --use circleci

          docker buildx ls
          docker context inspect circleci
    - run:
        name: Login to registry
        command: |
          # login credentials should be provided via context or project environment variables.
          echo $DOCKER_PASSWORD | docker login $DOCKER_REGISTRY --username $DOCKER_LOGIN --password-stdin
    - restore_cache:
        keys:
        - hello-buildx-docker-{{ arch }}-{{ .Branch }}-
        - hello-buildx-docker-{{ arch }}-
    - run:
        name: Docker buildx with local cache
        command: |
          docker buildx build --progress=plain \
            --tag="${DOCKER_REGISTRY}/${DOCKER_USER}/hello-buildx:${CIRCLE_SHA1}" \
            --cache-to=type=local,mode=max,dest=/tmp/dockercache \
            --cache-from=type=local,src=/tmp/dockercache \
            --output=type=docker \
            .

          docker image ls
    - save_cache:
        key: hello-buildx-docker-{{ arch }}-{{ .Branch }}-{{ checksum "/tmp/dockercache/index.json" }}
        paths:
        - /tmp/dockercache
    - run:
        name: Publish image
        command: |
          docker image tag "${DOCKER_REGISTRY}/${DOCKER_USER}/hello-buildx:${CIRCLE_SHA1}" "${DOCKER_REGISTRY}/${DOCKER_USER}/hello-buildx:local-docker"
          docker image push "${DOCKER_REGISTRY}/${DOCKER_USER}/hello-buildx:local-docker"
    - run:
        name: Prune cache
        command: |
          docker buildx prune

This solution uses the local cache option for --cache-from and --cache-to.
With the local cache strategy, we can take advantage of CircleCI’s restore_cache and save_cache steps to load and save the cached layers between builds!

Hope this helps! :nerd_face:

I have a sample repo showcasing the registry and local cache strategies:


Hey Everyone,

We just released version 8.2.1 of the aws-ecr orb, which contains a fix for using Docker layer caching with docker buildx.

DLC only works with docker buildx when no docker context is being used. The aws-ecr orb only uses a context to support multi-architecture builds. Therefore, the orb now removes the docker context if only one platform is provided (linux/arm64, linux/amd64, etc.), enabling the use of DLC.

If more than one platform is provided, a context will be used but DLC is not supported at this time.
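As I understand the fix, the DLC-compatible path for a single platform is to skip `docker context create` and `docker buildx create` entirely, so buildx falls back to the default docker driver and builds through the DLC-backed daemon. A rough sketch (image name is a placeholder):

```shell
# No custom context or builder here: the default "docker" driver builds
# via the remote daemon, whose /var/lib/docker sits on the DLC volume.
docker buildx build \
  --platform linux/amd64 \
  --tag myorg/myapp:latest \
  --load \
  .
```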

We’re currently working on adding DLC support for multi-architecture builds and it should be available in the upcoming months.

Thank you and hope that helps!

Brian


Hi @Brian. If we want to build using buildx and hook into BuildKit (e.g., for zstd builds via docker buildx use zstd-builder), we do seem to need a context; otherwise we get the error “could not create a builder instance with TLS data loaded from environment”.

And if we create a context, DLC is not supported.

Is there a way around this? This is frustrating because we now seem to have to pick between DLC and zstd builds.
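For context, the zstd setup being described looks roughly like the following (builder and image names are placeholders; note that the compression option is only honored by certain output types, e.g. when pushing to a registry):

```shell
# A dedicated builder (docker-container driver) is what appears to
# require a context on CircleCI, which in turn disables DLC.
docker buildx create --name zstd-builder --use

# Produce zstd-compressed layers via the image output type.
docker buildx build \
  --output type=image,name=myorg/myapp:latest,compression=zstd,push=true \
  .
```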

Hey @sina,

From my understanding, DLC won’t work with the builder context that enables parallel multiarch builds.

Having said that, I am working on adding the docker buildx build command to the docker orb with the ability to specify a context.

I’m going to have to test zstd to see if it works with Docker layer caching, but if it does, using the docker orb should be a viable solution.

Let me check with the team to see if we can test zstd with DLC and get back to you.

Thank you!

Brian

Thanks a lot, Brian. I think the main goal here is really to use zstd, whether through buildx or not.

Zstd can have a big impact for many customers (e.g., AWS fargate startup times).

However, we don’t want this to come at a substantial increase in build times without DLC. When we fall back to local caching (Docker’s cache exports plus save_cache/restore_cache on CircleCI), the job takes even longer, because more time is spent saving and restoring the cache than the cache saves during the build.

I look forward to what you find out re: using zstd with DLC. Many thanks.