Hi,
We are using a custom Docker image, hosted on a private Docker registry, as an executor in our CI. All works OK, but we are noticing that every pipeline spends 1 to 1.5 minutes downloading the image before the pipeline starts. My question is: how can we make sure the image is cached for future pipelines?
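For reference, this is roughly how the executor is defined (the image name and credential variables below are placeholders, not our real values):

```yaml
version: 2.1
jobs:
  build:
    docker:
      # Custom image pulled from a private registry on every pipeline run
      - image: registry.example.com/team/ci-image:latest
        auth:
          username: $REGISTRY_USER
          password: $REGISTRY_PASS
    steps:
      - checkout
```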
How are you defining the executor within your config.yml file? You will need to set the option
docker_layer_caching: true
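A minimal sketch of where that option can go, assuming a 2.1 config (image names are placeholders). Note that it is accepted on the machine executor and on setup_remote_docker, and per the docs it caches layers produced while building images inside the job, rather than the pull of the executor image itself:

```yaml
version: 2.1
jobs:
  build-on-machine:
    machine:
      image: ubuntu-2004:current    # placeholder machine image
      docker_layer_caching: true    # reuse layers cached from previous jobs
    steps:
      - checkout
  build-with-remote-docker:
    docker:
      - image: cimg/base:stable       # placeholder convenience image
    steps:
      - setup_remote_docker:
          docker_layer_caching: true  # cache layers in the remote Docker engine
      - run: docker build -t myapp .  # a build step that benefits from the cache
```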
If you expect to use your image frequently in the CI process, you may find that uploading it to Docker Hub improves performance even if caching is enabled. This is because CircleCI has tight integration between their systems and Docker's. In the past they have commented that both companies were operating in the same AWS region and that they directly accessed Docker's S3 store.
I’ve also come across the following staff member reply that hints at how the caching process works.
As @timothyclarke has mentioned, in general we run builds over a large number of machines, and images will only be pulled from cache if they were previously used with that machine. The rate at which you will get a cache hit depends on how many other users are using the same image in their builds. If the image you are using is older or not used by many users, the chances of it getting pulled from cache is lower.
If they are caching based on usage, you may find that your private image is not cached that frequently.
(Linked topic: Circle is not caching any part of my custom but public Docker image)
Thank you @rit1010, I was able to read more about this here: https://discuss.circleci.com/t/caching-images-from-external-registries/41617
It seems that there was no solution at the time; I am just wondering if anything has changed since then. Unfortunately, placing the image inside a public repository is not an option. Is there any info regarding which region the CircleCI runner machines are located in?
I believe the regional info given in the following thread is still the most recent that has been published.
(The thread contains a link to the docs and a confirmation from a staff member.)
If you cannot use Docker Hub as a private repository, are you able to use a shared image cache downstream of the repository? A pull-through registry cache is one way to do this; see the sketch below.
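As a sketch of that idea, the open-source registry:2 image can be run in pull-through cache mode in front of an upstream registry. The hostname and credentials below are placeholders, and you would still need somewhere persistent to run it:

```yaml
# docker-compose.yml: minimal pull-through cache using the registry:2 image
version: "3"
services:
  registry-cache:
    image: registry:2
    ports:
      - "5000:5000"
    environment:
      # proxy.remoteurl switches the registry into pull-through cache mode
      REGISTRY_PROXY_REMOTEURL: https://registry.example.com
      # credentials for the upstream registry (placeholders)
      REGISTRY_PROXY_USERNAME: cache-user
      REGISTRY_PROXY_PASSWORD: cache-pass
```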
So according to that comment we are already in the same region, so it looks like that does not help much, and moving to Docker Hub won't help either.
If you are already in the same region, as you say, moving the image around will not help much, as the time taken to retrieve the image is going to be much the same regardless of its source.
One extreme option would be to look at a self-hosted instance. This is where you have a dedicated system or AWS instance that can run your workflows. As this is persistent, it can cache images to its local storage for reuse. I use such instances myself, as we run a small VMware cluster and so have somewhere to deploy the instances without incurring extra costs or complexity. A sketch of how a job targets such a runner is below.
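For illustration, a job is pointed at a self-hosted runner via its resource class; the namespace/name value below is a placeholder for whatever is issued when the runner is created:

```yaml
version: 2.1
jobs:
  build:
    machine: true                               # run on a self-hosted runner
    resource_class: your-namespace/your-runner  # placeholder resource class
    steps:
      - checkout
```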
Actually, that sounds like a good idea. Thanks @rit1010, we will look into self-hosted instances.