I currently use the docker executor type. Within it, I install docker and docker-compose, and my step looks like this: docker-compose -f docker-compose.ci.yml run pytests.
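Roughly, the relevant part of my config.yml looks something like this (the base image, install step, and service name are simplified for illustration, not my exact setup):

```yaml
version: 2
jobs:
  test:
    docker:
      - image: circleci/python:3.7    # illustrative base image
    steps:
      - checkout
      - setup_remote_docker           # gives the job a remote Docker engine to talk to
      - run: pip install docker-compose    # install docker-compose ourselves
      - run: docker-compose -f docker-compose.ci.yml run pytests
```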
Because of the lack of volume mounts I have to do clever tricks to get artifacts out of containers. It also means I have to use a dedicated docker-compose yml file that doesn’t have volume mounts. This file is thus different from the ones I use when doing local dev with docker-compose.
I could switch to the machine type and still run the same docker-compose commands. That would immediately give me volume mounts, which make it really easy to ship artifacts (e.g. test result XML files) out of the containers and store them as test result artifacts.
We did that on this project and it made the whole thing easier. The bonus was that I can now use one and the same docker-compose.yml file for local dev and in my .circleci/config.yml.
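Roughly, the machine variant looks like this (the paths are illustrative). Because everything runs on one VM, the volume mounts in the regular docker-compose.yml just work:

```yaml
version: 2
jobs:
  test:
    machine: true                  # one VM; docker and docker-compose come preinstalled
    steps:
      - checkout
      - run: docker-compose run pytests    # same compose file as local dev
      - store_test_results:
          path: test-results       # written to the host via a volume mount
      - store_artifacts:
          path: test-results
```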
What I don’t know is the risks, costs, and performance implications of doing this. That’s what I’d love to hear about from people in the community (and from CircleCI employee experts).
For example, the documentation is clear about the fact that the docker type starts instantly but the machine type can take up to 60 seconds to start. However, I’ve often found it to be much faster than that.
Another risk with the machine type is that it’s not clear which Linux system I’m running on. With the docker type you specify something like ubuntu:18.04 as the container that builds the Docker containers.
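Though from what I can tell, you can at least pin the machine VM image explicitly in config, something like this (the tag is illustrative):

```yaml
jobs:
  test:
    machine:
      image: ubuntu-1604:201903-01   # illustrative tag; pins the VM's OS image
```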
You don’t, yet. The docs say that Machine might become a premium feature, but it has been a free option for around a year, if I recall correctly. I would expect that if the price were to change, CircleCI would give a few months’ advance notice.
Use volume containers (i.e. non-running containers), which still seem to work in Docker Compose (at least they work for me, though I’ve only tried read-only so far).
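Something like this, in compose file format 2 (volumes_from was dropped in format 3); the names are illustrative:

```yaml
version: "2"
services:
  artifacts:
    image: busybox        # tiny image; the container exists only to own the volume
    volumes:
      - /artifacts
    command: "true"       # exits immediately; the container never actually runs
  pytests:
    build: .
    volumes_from:
      - artifacts         # see the same /artifacts volume from the test container
```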
?? That won’t work unless you get the files out of the docker containers.
That’s what we had to do. We keep one container running at all times, so that even if a docker-compose command fails, we can still extract the files from a different container with docker cp at the end.
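In config.yml terms it looks roughly like this (the names are illustrative, and the container name is simplified; with compose it would normally be something like projectname_keeper_1). The when: always makes sure the copy runs even after a failed test step:

```yaml
steps:
  - run: docker-compose -f docker-compose.ci.yml up -d keeper   # keeper stays alive
  - run: docker-compose -f docker-compose.ci.yml run pytests
  - run:
      name: Extract test results
      command: docker cp keeper:/artifacts ./test-results
      when: always          # run this step even if the tests failed
  - store_artifacts:
      path: test-results
```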
This wasn’t pretty. Plain old volume mounts would make this unnecessary.
Sure, I’ve bumped into this too. For what it’s worth, my guess is that this is a Docker limitation, not a CircleCI one. The issue, I think, is that Docker volumes shared between running containers don’t work across machines, and CircleCI’s infrastructure spins up containers on arbitrary machines in order to handle the workload.
I dare say it could be “fixed” by CircleCI by setting up some form of network-based file system, but I don’t know how much work that would be. The dilemma they have is one of perception: most people’s experience of Docker volumes is on a single machine, so users reckon they “should” work.
If you fixed it, and the result is reliable, that’s a win, I reckon. If you move to {other CI provider} then you’d have to hack something else.
So that’s something that’s on my mind. If I use the machine type and plain old docker-compose, the config.yml of some other CI would look pretty much the same. The config.yml file would read very much like a README about how to run the tests with docker-compose locally on your laptop.
By using machine and docker-compose, the config, apart from the custom store_artifacts step, is completely CircleCI-agnostic.
Yep! I do something similar, but with the Docker executor rather than Machine. I think the latter forces your containers onto a single VM, which is why volumes work there.
From a CI provider’s perspective, Docker is going to be more scalable, since most users don’t use the full 4 GB RAM limit. However, I’d guess that Machine reserves that memory per use, making it more expensive for CircleCI to run, which is why I suspect a price change might happen.
I guess it’d be ideal if you could figure out a good-enough solution to the volumes issue, so that if the price change happens and you don’t like it, you can fall back to Docker.