How do I cache with docker-compose?

I have this file, which I basically stole off a GitHub repo when searching for “save_cache” together with “docker-compose” - but it does not work. It saves the cache, and even loads it again correctly, but docker-compose just builds everything from scratch again. Since the cache is not loaded on the first build, the second build is actually slower than the first!

---
version: 2
jobs:
  generate_cache:
    machine: true
    steps:
      - checkout
      # restore_cache takes only a key (or keys); it has no "paths" option
      - restore_cache:
          key: docker-{{ .Branch }}-{{ checksum "docker-compose.yml" }}

      - run:
          command: |
            # pipefail is switched off so that piping into "true" swallows
            # the failure of docker load on the first build, when no
            # tarball exists yet
            set +o pipefail
            docker load -i ~/caches/images.tar | true

      - run: pip install docker-compose
      - run: docker-compose pull
      - run: docker-compose build
      - run:
          command: |
            mkdir -p ~/caches
            # note: "docker images -q" lists image IDs only, so the saved
            # images lose their repository:tag names
            docker save $(docker images -q) -o ~/caches/images.tar
      - save_cache:
          key: docker-{{ .Branch }}-{{ checksum "docker-compose.yml" }}
          paths:
            - ~/caches/images.tar

I have the same question…

Just had the same question today. Any ideas on how to do that?

I actually didn’t know one could build with Docker Compose, @JCM - learnt something new! :smiley:

The way I do this is to treat each image as a separate project/repo, which has its own build process on CircleCI, and then pushes to an image registry. Then there is no need to build in Compose - you just pull your separate images, and then run and test as required.
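For illustration, a compose file in that style has no build sections at all - every service just names a prebuilt image. A minimal sketch (the registry path, tags and service names here are invented for the example):

# docker-compose.yml - nothing to build, everything is pulled
version: "3"
services:
  app:
    image: registry.example.com/myorg/app:1.2.0   # hypothetical private image
    depends_on:
      - redis
  redis:
    image: redis:4-alpine                         # stock image from Docker Hub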

Of course, there is the delay of pulling images, but Circle will be on a fast network. I pull 948M of data in ~10 images and this takes ~57 seconds, which I regard as pretty good. I could tune those to lose a few MB if required, but I’m well within my build allowances at present, so there’s no major need.

That does not work if you use Docker Compose to test your own application. We use it because testing our build process depends on setting up a production-like environment, with Redis, MongoDB and the application code. Docker Compose is the easiest way to do that.

It does work, since I use Docker Compose for my integration tests, and I do indeed do it as I have described. Can you expand on what you’d need to know in order to do that, or on what detail I have missed that would make separate builds, followed by a final integration test in a Compose environment, unsuitable?
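As a rough sketch of the shape of that job (not my real config, and the test-runner service name is made up):

version: 2
jobs:
  integration_test:
    machine: true
    steps:
      - checkout
      # the images were already built and pushed by each component's own CI project
      - run: echo "$REGISTRY_PASS" | docker login -u "$REGISTRY_USER" --password-stdin registry.example.com
      - run: docker-compose pull
      - run: docker-compose up -d
      - run: docker-compose run --rm test   # hypothetical test-runner service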

I found a premium feature, Docker Layer Caching, documented here: https://circleci.com/docs/2.0/docker-layer-caching/#dockerfile

But will it also work for docker-compose?

When enabled, will it speed up this command?

docker-compose -f docker-compose.yml pull
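For reference, the linked docs turn the feature on with a flag on the executor - something like this for the machine executor used earlier in the thread, I believe (whether it speeds up pull as well as build is exactly my question):

version: 2
jobs:
  build:
    machine:
      docker_layer_caching: true   # the premium feature
    steps:
      - checkout
      - run: docker-compose -f docker-compose.yml pull
      - run: docker-compose -f docker-compose.yml build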

When I think of caching in this context, I mean caching built layers, so that something that has been built does not have to be built again - to clarify, that is what I meant in my messages up-thread. However, it sounds like you mean that when pulling from a remote registry, a local copy is used if the image has been seen before.

I don’t know how the premium feature works, or whether it can do this. The (free) cache system might work, though I am not sure what cache key you would use - the ideal would be to cache each layer against its sha256 digest, but that’s not really possible, since you’d need to put the appropriate number of cache statements into your config file, one per layer (!). If this would work at all, you’d have to cache a whole image, and then make its layers available in Docker as already-fetched layers.
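To illustrate what I mean in plain Docker commands (the image name is just an example):

# save the image under its repository:tag name, so its layers come back usable
docker save -o images.tar myorg/app:1.2.0

# ...later, on a fresh build machine...
docker load -i images.tar

# a subsequent pull of the same tag should then find the layers already present
docker pull myorg/app:1.2.0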

FWIW, I fetch from a remote registry (the free GitLab one); it is much quicker than building, so I don’t bother caching the fetches. I estimate my fetch speeds from GitLab at around 1G/min, and my images total around 900M, so it is around 50 sec for me. If you have a long delay here, it may be worth seeing whether you can trim your images down.

Hi halfer,
thank you for your reply :slightly_smiling_face:

Yes, it’s about fetching from a remote registry in our case too. Maybe it’s just a nice-to-have feature, but speeding it up could be worth a try.

We want to run integration tests for one of our microservices, which requires the other microservices to be running too.

Let’s say we have three microservices: A, B and C.

We want to test microservice A (we have CircleCI enabled for the repository of microservice A).

Our docker-compose.yml (which is executed on CircleCI) then downloads the images of microservices B and C from the Google Cloud container registry.

Each image is based on node:carbon and contains just the source code of that particular microservice. It takes a minute to pull those images every time (if caching can trim this time, that would be perfect).

Maybe it would be possible to use docker-compose.yml in the cache key hash? We use semantic versioning of the microservices in the docker-compose.yml, e.g. microservice-A:1.3.0, so the checksum would change whenever a version changes.

The last thing I don’t know yet is which folder to cache (it has to be the one which contains the downloaded images) - a sketch of what I have in mind follows.
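From what I can see, pulled images do not land in a folder of their own - they go into Docker’s internal storage (/var/lib/docker) - so perhaps the way round it is to docker save them into a directory we pick ourselves and cache that. Something like this, with invented version tags:

      - restore_cache:
          keys:
            - compose-images-{{ checksum "docker-compose.yml" }}
      - run: docker load -i ~/caches/images.tar || true
      - run: docker-compose pull   # should be quick when the loaded images already match
      - run:
          command: |
            mkdir -p ~/caches
            docker save -o ~/caches/images.tar \
              gcr.io/my-project/microservice-B:1.3.0 \
              gcr.io/my-project/microservice-C:2.0.1
      - save_cache:
          key: compose-images-{{ checksum "docker-compose.yml" }}
          paths:
            - ~/caches/images.tar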
