Expected size and download time for the base image

:wave: The “Spin up environment” step for my workflow takes around two minutes to complete with most of the time spent downloading the Docker image.

I’m curious what the expected image size and download time for this step might be, and whether caching might help. The tags for the base image currently show a size of 0.00B, while my download is 5.853GiB and expands to 13.36GiB.

The first steps in my workflow are similar to:

    jobs:
      build:
        docker:
          - image: cimg/base:current
        steps:
          - checkout
          - setup_remote_docker:
              docker_layer_caching: true

The output of the “Spin up environment” step (~2m) is:

machine-agent version 1.0.62513-953dcb61 (image: "ubuntu-2204:2023.10.1", public ip: "xx.xx.xx.xx", provider: "EC2")
Restoring DLC
running command: service [docker stop]
Warning: Stopping docker.service, but it can still be activated by:
  success after: 499.321587ms
running command: systemctl [stop docker.socket]
  success after: 3.576661ms
Downloading: downloaded 5.853GiB, expanded to 13.36GiB, ratio 0.44, after 2m0.486478802s.
running command: mount [-t ext4 -o loop,discard /tmp/.dlc.img /var/lib/docker]
  success after: 1.678114079s
running command: service [docker start]
  success after: 1.046429057s
running command: systemctl [start docker.socket]
  success after: 2.844492ms
Restore DLC success after 2m3.762383197s.
docker-agent version 1.0.13421-38ad8c7
Downloading docker-agent: success after 214.064156ms.

And the “Spin up container environment” step (~14s) outputs:

task-agent version 1.0.207522-d583bc78
Downloading task-agent: success after 318.007282ms.
System information:
 Server Version: 24.0.6
 Storage Driver: overlay2
  Backing Filesystem: extfs
 Cgroup Driver: systemd
 Cgroup Version: 2
 Kernel Version: 6.2.0-1012-aws
 Operating System: Ubuntu 22.04.3 LTS
 OSType: linux
 Architecture: x86_64

Starting container public.ecr.aws/eks-distro/kubernetes/pause:3.6
  image cache not found on this host, downloading public.ecr.aws/eks-distro/kubernetes/pause:3.6
3.6: Pulling from eks-distro/kubernetes/pause
23d07b917726: Pull complete 
8389103237f2: Pull complete 
Digest: sha256:c38d6dd4c0a53ccbf200f85a1057f273040e4ad4a29171e1cdcc0e4414f1b12c
Status: Downloaded newer image for public.ecr.aws/eks-distro/kubernetes/pause:3.6
Starting container cimg/base:current
Warning: No authentication provided, using CircleCI credentials for pulls from Docker Hub.
  image cache not found on this host, downloading cimg/base:current
current: Pulling from cimg/base
aece8493d397: Pull complete 
be5155afc6cb: Pull complete 
223d3e1f5d8e: Pull complete 
3e7300060c1a: Pull complete 
9fe9fc38ede2: Pull complete 
24bddf68070f: Pull complete 
75f60dda1c9f: Pull complete 
4f4fb700ef54: Pull complete 
Digest: sha256:9ab3549f5cf206b2ea252bf4ca2c8f3b62ae4a662ea7290d3ccb2e8774b134cd
Status: Downloaded newer image for cimg/base:current
  using image cimg/base@sha256:9ab3549f5cf206b2ea252bf4ca2c8f3b62ae4a662ea7290d3ccb2e8774b134cd
  pull stats: download 400.8MiB in 4.069s (98.49MiB/s), extract 400.7MiB in 10.925s (36.68MiB/s)
  time to create container: 1.101s
Time to upload agent: 235.071699ms
Time to start containers: 623.848113ms

There’s also a later step that builds a Docker image as part of the job which takes an additional ~30 seconds when not cached, but is usually cached successfully.
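For context, that later build step is roughly shaped like the following (a sketch; the step name and image tag are illustrative, not from the original config). DLC helps here because unchanged layers are restored from the saved volume instead of being rebuilt:

```yaml
      - run:
          name: Build application image
          command: docker build -t myapp:ci .
```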

Any information on this download speed and size, or tips for speeding it up, would be much appreciated! There aren’t any problems with this speed as is, I’m just hoping to shave off seconds where possible. Thanks! :smiley:

CircleCI does have a cache in place for Docker images, as well as a store for the per-project images created by the docker layer caching feature. When I run the following config.yml, the whole process takes 18s:

    version: 2.1 # Use 2.1 to enable using orbs and other features.

    jobs:
      build:
        docker:
          - image: cimg/base:current
        steps:
          - checkout
          - setup_remote_docker:
              version: 20.10.14
              docker_layer_caching: true
          - run: echo "hi"

The only difference is that your job has data saved via the docker_layer_caching feature, which results in the 5.853GiB saved image download in the “Spin up environment” step.

As such, the performance is being impacted by the fact that you are saving the Docker image you are building for reuse. It is not clear from what you have posted whether this is what you expect to happen. As the image is now over 13GiB in size, your config.yml may be extending the image every time you run the job.
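One way to check whether the DLC volume keeps growing is to report Docker’s disk usage from inside the job and remove dangling layers (a sketch using the standard `docker system df` and `docker image prune` CLI commands; the step name is illustrative):

```yaml
      - run:
          name: Inspect and trim Docker disk usage
          command: |
            docker system df       # show space used by images, containers, and build cache
            docker image prune -f  # remove dangling layers so DLC does not keep saving them
```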



@rit1010 Thank you for the quick response and example :raised_hands:

The layer cache definitely seems to impact my image size, since a following step has a docker build where some large npm packages are installed. I’ll look into reordering these Docker steps to improve caching!
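A common reordering is to copy only the dependency manifests before the rest of the source, so the expensive `npm install` layer is rebuilt only when the dependencies actually change (a sketch assuming a typical Node project; the base image and file names are illustrative):

```dockerfile
FROM node:20-slim
WORKDIR /app

# Copy only the manifests first: this layer, and the npm ci layer below it,
# stay cached until package.json or package-lock.json changes.
COPY package.json package-lock.json ./
RUN npm ci

# Source changes invalidate only the layers from here down.
COPY . .
CMD ["node", "index.js"]
```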

I also checked the DLC project settings, and I’m thinking that deleting the cache contents might be helpful for testing changes. The confirmation modal makes me unsure of the safety of this, though:

Confirm deletion of this project’s DLC cache contents

Only delete cache content if you are observing jobs with DLC continue to fail.

It’s my understanding that flushing a cache is usually safe, but should I heed this warning?

It should be safe to delete the cache if all it is doing is speeding up a process that can be repeated anyway (but at a slower speed) without the cache being available.

If you do have a process that is dependent on information that can only be found in the cache you will need to refactor anyway as CircleCI does not offer any guarantees about the availability of cached data.


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.