Suppose our base container image is cimg/ruby:2.5, and we want to install some additional packages on top of it, then use that built environment for many subsequent parallel test containers.
Is it possible to utilize Docker Layer Caching (DLC) for that? The documentation was not super clear on that point. It seemed to indicate that if we're building a Docker image as part of our CI process, it would be quite beneficial to use DLC; however, we don't need the image for any external purpose, only for our CI testing. Furthermore, since our environment is inside a Docker container, I wasn't sure whether DLC was something we could utilize for a base image.
My understanding is that the setup would actually be:
- job: build ~ using base image cimg/something (doesn't matter)
  => invoke setup_remote_docker
  => invoke "docker build" with a Dockerfile that:
    → starts FROM cimg/ruby:2.5
    → installs packages and sets up the environment
  => save the Docker image as "custom"
- job: test ~ using cimg/something (doesn't matter)
  => setup_remote_docker
  => run the "custom" image in remote_docker
  => trigger tests within remote_docker ??? (or something like that)
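For concreteness, here's roughly what I imagine that build job looking like in .circleci/config.yml. Everything here is a sketch: the image tag `custom`, the tarball name, and the base image for the job itself are placeholders, and I may well be misusing the setup:

```yaml
version: 2.1

jobs:
  build:
    docker:
      - image: cimg/base:stable   # job's own image doesn't matter here
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true   # the DLC flag in question
      # Dockerfile would contain: FROM cimg/ruby:2.5, then the package installs
      - run: docker build -t custom:latest .
      # save the built image as a tarball so a later job could reach it
      - run: docker save custom:latest -o custom.tar
      - persist_to_workspace:
          root: .
          paths:
            - custom.tar
```

The `docker save` / `persist_to_workspace` part is my guess at how to hand the image to the test job without a registry.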
What I’m envisioning is something like:
- job: build ~ using base image cimg/ruby:2.5
  => install packages and set up the environment
  => save the current container as "custom"
- job: test ~ using base image "custom" from the previous build
  => run tests
Is the only way to achieve this to upload our built image to Docker Hub (or equivalent)? We absolutely do not need the image available externally, and pushing/pulling images to and from an external registry seems to add significant time. This seems like what workspaces should be for, right?
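To make the workspace idea concrete, this is roughly what I'd hope the test job could do, assuming the build job had persisted a `docker save` tarball called custom.tar to the workspace (again, hypothetical names, and I'm not sure this is the intended use of workspaces):

```yaml
  test:
    docker:
      - image: cimg/base:stable
    parallelism: 4   # many parallel test containers, as described above
    steps:
      - attach_workspace:
          at: .
      - setup_remote_docker
      # load the previously built image instead of pulling from a registry
      - run: docker load -i custom.tar
      # run the test suite inside the custom image in the remote Docker engine
      - run: docker run custom:latest bundle exec rspec
```

If something like this works, it would avoid the external push/pull round trip entirely.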