I have a dev Docker image used for local development. It has the right versions of everything I need to build my app, so I am using it as my primary Docker image on CircleCI.
The problem is that CircleCI downloads this image every time and does not seem to cache it locally. How can I make it cache my image?
As @timothyclarke has mentioned, we run builds across a large number of machines, and an image will only be pulled from cache if it was previously used on that particular machine. The rate at which you get a cache hit therefore depends on how many other users are running builds with the same image. If the image you are using is older or not used by many people, the chances of it being pulled from cache are lower.
I would recommend trying one of our circleci- or cimg-namespaced images to increase the chance of restoring the image from cache. I am not sure this would be faster overall, though, since adding steps to install your additional tools may take longer than using the image you have now.
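As a rough sketch, that setup might look something like this (the apt packages and the make build command are just placeholders for whatever your dev image currently provides):

jobs:
  my-job:
    docker:
      - image: cimg/base:2021.07
    steps:
      - checkout
      - run:
          name: Install extra build tools
          command: |
            # placeholder tooling; swap in whatever your app actually needs
            sudo apt-get update
            sudo apt-get install -y build-essential libpq-dev
      - run:
          command: make build # placeholder build command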
I believe you could also create a new Docker image based on one of our images; the layers inherited from the CircleCI image would then still have a higher chance of being cached.
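If you go that route, the Dockerfile can stay very small; something like this (again, the packages are placeholders; the important part is the FROM line, since those base layers are the ones most likely to already be cached):

FROM cimg/base:2021.07

# cimg images run as the circleci user with passwordless sudo
RUN sudo apt-get update && \
    sudo apt-get install -y build-essential libpq-dev && \
    sudo rm -rf /var/lib/apt/lists/*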
OK, that makes sense. Is caching a custom image something that’s available on a paid plan? My goal is to have our code run in a production-like environment as much as possible, so using Circle’s images would diverge from that goal.
There’s a feature called “docker layer caching” that is available on paid plans - however, I don’t think it will help with the setup you likely have right now. It can help if you change the job slightly (see below).
If you’re trying to do something like this (pardon the pseudocode):
jobs:
  my-job:
    docker:
      - image: docker.io/my/custom-builder-image:latest
    steps:
      - run:
          command: |
            # everything here is done in the custom builder image
            ...
then you’ll almost never get any caching, for the reasons that the others have mentioned (regarding VMs and such).
However, if you pay for a plan, enable docker layer caching, and change your setup to something like this:
jobs:
  my-job:
    docker:
      - image: cimg/base:2021.07
    steps:
      - setup_remote_docker: # this allows you to run docker commands in the steps below
          docker_layer_caching: true # this enables the caching you paid for
      - run:
          command: |
            docker pull docker.io/my/custom-builder-image:latest # caching kicks in
            docker run docker.io/my/custom-builder-image:latest /bin/bash -c '
              # everything here is done in the custom builder image
            '
      ...
You’ll be more likely to get the caching you’re looking for. I think there are some more subtleties that I’m missing, but the gist of it should work.
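One subtlety worth flagging: with setup_remote_docker, volume mounts from the job’s primary container don’t reach the remote Docker engine, so if the build needs your checked-out source you’d copy it into the container with docker cp, roughly like this (the /src path and make build are placeholders, and this assumes a checkout step earlier in the job):

      - run:
          command: |
            docker pull docker.io/my/custom-builder-image:latest
            docker create --name builder --workdir /src \
              docker.io/my/custom-builder-image:latest /bin/bash -c 'make build'
            docker cp . builder:/src # copy the checkout into the container
            docker start --attach builder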
There are some tradeoffs in time; spinning up the remote Docker environment eats some time by itself. Depending on how large your builder image is, you may or may not see a benefit.