Spin up Environment, Docker Cache?



So sometimes we see this step complete instantly, and sometimes it downloads all the layers again, even though the images have not changed at all.

Our config looks like this:

version: 2
jobs:
  build:
    docker:
      - image: image1:version
      - image: image2:version

As you can see, we use Docker with multiple images, all hosted publicly, and none of them have changed between builds.

Is this because between tests we run on a new machine which doesn’t have the image, and sometimes we run on a machine that has downloaded the images before?


Completely accurate.


Are there plans to cache this download in future? We see ~40 seconds to spin up the environment from Docker images.


I don’t know of any plans to change the way we are currently caching. The downloads are cached, just not on hosts where you haven’t had a build run.


When we started using CircleCI 2.0, this matched our experience: most builds spun up the environment quickly, and some were slower because there was no cache. We haven’t changed the images we’re using in months, but nowadays every single build downloads the images; the cache never seems to be there anymore. This is true of our own image as well as the widely used Selenium and Elasticsearch images in our builds. And since we generally run dozens of builds a day, how many machines must there be for us to never find a cache anymore?

It often takes up to 2 minutes now to spin up an environment with 3 images. We’ve spent a lot of time making CircleCI 2.0 work for our projects because of the promise that it would be faster, but that is no longer the case.


We are seeing this too. Having 5 jobs in a workflow, each spinning up for 40 seconds, wipes out any speed increase we got from upgrading to CircleCI 2.0.


Have you looked at workspaces? I believe once you have set up your build, you can share it between consecutive jobs.
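For anyone unfamiliar, a minimal sketch of how workspaces are wired up in a CircleCI 2.0 config (the job names, image names, and paths here are illustrative, not from the original poster’s config):

```yaml
version: 2
jobs:
  build:
    docker:
      - image: image1:version          # placeholder image
    steps:
      - checkout
      - run: make build                # produces artifacts in ./output (illustrative)
      - persist_to_workspace:          # save ./output into the workflow's workspace
          root: .
          paths:
            - output
  test:
    docker:
      - image: image1:version
    steps:
      - attach_workspace:              # restore the workspace into this job
          at: .
      - run: make test                 # ./output from the build job is available here
workflows:
  version: 2
  build_and_test:
    jobs:
      - build
      - test:
          requires:
            - build
```

Note that this shares files between jobs; as pointed out below, it does not by itself avoid the image download during spin-up.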


Workspaces just store and apply file system contents between jobs. Each job still needs to spin up, which involves downloading the Docker image, and that takes time.


Gotcha. Are you using the CircleCI-native system of one primary and several secondary images under the docker key in your config? If so, I wonder if moving to Docker Compose would help - you’d be able to handle the images manually, and thus (I assume) you would be able to store them in a workspace.

I use DC myself and it works very well inside a single CircleCI build container (it is essentially Docker in Docker). I pull ~950 MB across ~10 images in about 50 seconds, if memory serves correctly.
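To illustrate the kind of setup I mean, here is a minimal docker-compose.yml sketch; the service names, image names, and version tags are placeholders, not anyone’s actual config:

```yaml
version: '3'
services:
  app:
    image: yourorg/app:latest              # placeholder for your own image
    depends_on:
      - selenium
      - elasticsearch
  selenium:
    image: selenium/standalone-chrome:3.141  # illustrative tag
  elasticsearch:
    image: elasticsearch:6.8.0               # illustrative tag
```

Because Compose pulls and runs these images itself, you control when the pull happens and what you do with the images afterwards, rather than relying on the executor’s spin-up step.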

Alternatively, and at the risk of stating the obvious, reduce the size of your images. Have you got an image listing to hand, so readers can see how many images you have and the size in MB of each?


We do spin out multiple jobs in workflows. We use the same Docker image in each job. The image is Alpine-based and is 490 MB. Sometimes the environment spins up in 3 minutes and sometimes in 40 seconds. Since we are using the same image over and over again, it would be nice if we could cache it somewhere. Since the “Spin up Environment” step runs first, we can’t do that using the save_cache/restore_cache steps.


May I draw your attention to my suggestion of Docker Compose again? I think maybe I didn’t emphasise it enough, and I think it is worth considering. Or perhaps your response was an objection to my suggestion and I just did not understand it; I am happy to hear if that is the case. At present I am reading your reply as not addressing what I suggested, which has rather confused me.

In your first job, you could pull your own image manually, using docker pull. From there, you can use it in your own Docker Compose, which you would need to install yourself. You could then declare that the folder holding this image is a workspace or, as you say, use the cache instead.

In the subsequent jobs in your workflow, you restore the Docker image from the workspace/cache, so it does not need to be downloaded from a registry again. Of course, the process of decompressing a 490 MB file may be more time-consuming than I would think, in which case the proposal is not workable. However, perhaps it could be researched?
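To make that concrete, here is a sketch of what the pull-once/save/load approach might look like in config. The image name, paths, and job names are all illustrative, and whether docker load actually beats a registry pull would need to be measured:

```yaml
version: 2
jobs:
  fetch_image:
    docker:
      - image: docker:stable             # any image with the docker CLI; illustrative
    steps:
      - setup_remote_docker
      - run:
          name: Pull the image once and archive it
          command: |
            docker pull yourorg/ci-image:latest   # placeholder image
            mkdir -p /tmp/docker-cache
            docker save -o /tmp/docker-cache/ci-image.tar yourorg/ci-image:latest
      - persist_to_workspace:
          root: /tmp/docker-cache
          paths:
            - ci-image.tar
  use_image:
    docker:
      - image: docker:stable
    steps:
      - setup_remote_docker
      - attach_workspace:
          at: /tmp/docker-cache
      - run:
          name: Restore the image from the workspace instead of a registry
          command: docker load -i /tmp/docker-cache/ci-image.tar
```

The loaded image is then available to the remote Docker engine for docker run or Docker Compose; it does not change which image the executor itself spins up.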


We almost never change our base CI images, we’ve run thousands of builds with them, and they still download from scratch just about every time. Maybe one time out of twenty I see a build use an already-downloaded image, but that rate hasn’t climbed over time the way one would expect if it were just a matter of hosts needing to warm their caches.

Is CircleCI constantly spinning up and shutting down hosts (autoscaling group, etc.) such that very few hosts stick around long enough for us to hit them twice?

Downloading the Docker images takes significant enough time that we ended up making our workflows much less granular just to avoid the 30-60 seconds of image download time on each step. It’d be great if CircleCI could find some way to address this.


This is correct - sometimes you’ll hit a host you’ve used before and it has the image already downloaded, often it’s a totally new host. If desired, you could open a support ticket to request that a trial for docker layer caching be added to your account.


Would Docker Layer Caching help here, though? The web page about it specifically says, “DLC does not speed up downloading of the Docker images used to run your jobs.”


You’re correct - DLC will not help on the base docker executor with just pulling in your images. It only helps with building images.

EDIT: We just updated our docs on this topic: https://circleci.com/docs/2.0/docker-layer-caching/


Is this on the roadmap? Pulling our custom image down from quay.io is now by far the slowest thing we do in every workflow step.


No. We use Nomad to delegate jobs, and it has no concept of which images are cached where; it just looks for healthy machines with enough resources for your container.

It’d be neat if we could solve this, though. It’d save everyone a lot of time in their builds.

