Speed up Docker Spin up Environment?

We are using the Docker executor, and the "Spin up environment" step is slow. We have our own test image, so the layers are not cached, and it takes at least 1 minute to spin up the environment every time; sometimes it takes 5 minutes. With v2.0, we split our original test suite into 5 different jobs. That is supposed to save time, but the small jobs still need time to spin up. As a result, we went from a fairly consistent 25-minute test suite in v1 to a range of 18–26 minutes in v2. If there were a way to cache that spin-up environment, the times would be much more stable and would likely drop to 12–13 minutes at the fastest.

I know one option is to use the official CircleCI Docker images to take advantage of cached layers, but that largely defeats the purpose of building our own Docker image in the first place. Our test image is built to be consistent with our production image. The official CircleCI images ship different libraries, and it doesn't make sense to test against a different image and risk breaking our code in production.

The other option is using docker_layer_caching — pulling a base image and having CircleCI cache the layers. Is this the best option right now?
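If I'm reading the docs right, this is roughly how docker_layer_caching would be enabled when a job builds images (a sketch only — the image name and build step are placeholders, and the feature has to be available on the plan):

```yaml
version: 2
jobs:
  build:
    docker:
      - image: ourorg/test-image:latest  # placeholder for our custom image
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true  # cache layers between builds on this job
      - run: docker build -t ourorg/test-image:latest .  # placeholder build step
```

As I understand it, though, this caches layers for images *built inside* the job, which is not quite the same as caching the image pulled during "Spin up environment" — hence my question below.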

On another note: if "Spin up environment" needs a Docker base image anyway, and docker_layer_caching exists as a feature, shouldn't there be an option to cache the Docker layers used by "Spin up environment" itself?

Do (or could) all five of your jobs run on the same base image? If so, and if the jobs are in the same workflow, you could use workspaces to share folders of data between them, so that only the first job has to build the environment.
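Something along these lines (a sketch with made-up job names and build targets — a setup job persists its output to a workspace, and each downstream test job attaches it instead of rebuilding):

```yaml
version: 2
jobs:
  setup:
    docker:
      - image: ourorg/test-image:latest  # placeholder for your custom image
    steps:
      - checkout
      - run: make deps  # placeholder for the expensive environment setup
      - persist_to_workspace:
          root: .
          paths:
            - .
  test-1:
    docker:
      - image: ourorg/test-image:latest
    steps:
      - attach_workspace:
          at: .
      - run: make test-1  # placeholder for one slice of the suite
workflows:
  version: 2
  test:
    jobs:
      - setup
      - test-1:
          requires:
            - setup
```

The other four test jobs would follow the same pattern as `test-1`, each requiring `setup`.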

Alternatively, if the images need a lot of software installed, you could create a separate pipeline to build your base image and just rebuild it weekly. For this approach, push it to an external (public or private) registry and then pull it in CircleCI. It is also a good idea to use a lightweight OS (such as Alpine) to reduce the size of the images (100 MB is much better than 1.5 GB!).
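A sketch of the weekly rebuild using a scheduled workflow (the registry URL, branch name, cron time, and credential variables are all placeholders you'd replace with your own):

```yaml
version: 2
jobs:
  build-base-image:
    docker:
      - image: docker:stable  # any image with the Docker CLI works
    steps:
      - checkout
      - setup_remote_docker
      - run: |
          # Placeholder registry and credentials — substitute your own.
          docker build -t registry.example.com/ourorg/base:latest .
          echo "$REGISTRY_PASS" | docker login registry.example.com -u "$REGISTRY_USER" --password-stdin
          docker push registry.example.com/ourorg/base:latest
workflows:
  version: 2
  weekly-base-image:
    triggers:
      - schedule:
          cron: "0 3 * * 1"  # Mondays at 03:00 UTC, as an example
          filters:
            branches:
              only: master
    jobs:
      - build-base-image
```

Your test jobs would then use `image: registry.example.com/ourorg/base:latest` (or whatever tag you push) in their `docker` block.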

Finally, if you are building images and want a variety of layer-caching options, consider this post.