It takes approx. 17 seconds to spin up our custom image. That’s a lot of time!
Build-agent version 1.0.20555-d4df078e (2019-11-25T17:38:55+0000)
Docker Engine Version: 18.09.6
Kernel Version: Linux 796f557461c7 4.15.0-1052-aws #54-Ubuntu SMP Tue Oct 1 15:43:26 UTC 2019 x86_64 Linux
Starting container proxyco-docker-local.jfrog.io/lambda-pipeline-base:v8
image cache not found on this host, downloading proxyco-docker-local.jfrog.io/lambda-pipeline-base:v8
Can I lower the spin-up time somehow? Can I make sure the image is cached? Would moving the image to a public docker hub help?
Hi @BeyondEvil, caching is done per-machine, so subsequent runs on the same box should still have this image. That said, I can’t make any guarantees about how long this custom image will be retained. Moving the image to the public Docker Hub shouldn’t have much of an impact on this.
I suppose everything can be optimised, but 17 seconds isn’t that long. If one wants to make maximum use of the free build tier, or if the main build job takes only five seconds to run, then maybe 17 seconds would be worth cutting down. Otherwise I would say the optimisation isn’t worth the engineering effort.
How big is this image? A while ago I found that pulling from a GitLab registry ran at around 100MB/sec, which I consider an excellent speed. The only way to optimise further would be to reduce the size of the image - perhaps by moving to Alpine.
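As a rough illustration of the Alpine route, here is a hypothetical Dockerfile sketch - the base image and package names are assumptions, not the actual contents of lambda-pipeline-base:

```dockerfile
# Hypothetical slimming of a custom CI image: swap a Debian-based
# python base (~900MB) for the Alpine variant (~50MB).
# FROM python:3.8            # larger Debian-based base image
FROM python:3.8-alpine       # much smaller musl-based base image

# Install only the build tools actually needed, in a single layer,
# with --no-cache so the apk index doesn't bloat the image.
RUN apk add --no-cache build-base git

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
```

One caveat: Alpine uses musl instead of glibc, so Python wheels with compiled extensions may need to be rebuilt from source, which can slow the image build itself.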
I think this has to do with building images, correct? That’s not the problem I’m trying to solve. I’d like my primary build image, the one used in all the steps, to be cached.
I suspect it might be. GitLab to CircleCI would do that in ~3 seconds, if you get the same speeds I did a couple of years ago. Are you in a position to try it?
You’ll get the fastest transfer speeds from Docker Hub, as our builder machines are in the same AWS region as their S3 storage, and the Docker client is able to download directly from S3.
This should apply to private images as well as public ones.
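If you want to try that, moving an existing image to Docker Hub is just a retag and a push; the Docker Hub username below is a placeholder:

```shell
# Pull the current image from the private registry.
docker pull proxyco-docker-local.jfrog.io/lambda-pipeline-base:v8

# Retag it under a Docker Hub repository you control (placeholder name).
docker tag proxyco-docker-local.jfrog.io/lambda-pipeline-base:v8 \
    yourdockerhubuser/lambda-pipeline-base:v8

# Push it, then point the CI config's image reference at the new name.
docker push yourdockerhubuser/lambda-pipeline-base:v8
```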