I can’t wrap my head around the concept of having multiple images in the same executor. Is there somewhere in the docs that explains what functionality this enables / doesn’t enable?
Or to put it another way, what’s the point of it?
From reading this blog post, I understand it's possible for one container to communicate with another by exposing a port and making REST API requests, etc., but the docs don't seem to suggest doing that.
I’d like to be able to access the binaries in one (secondary) container from the primary one; is that possible?
For example, at the moment I'm building a WordPress theme that compiles some CSS & JS. I'm using the CircleCI PHP image, and I'd like to install packages with a specific version of Node.
So, I thought it might be possible using the following syntax:
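(The original config snippet wasn't included; a hypothetical sketch of what was presumably attempted — the image tags are illustrative:)

```yaml
version: 2.1
jobs:
  build:
    docker:
      - image: circleci/php:7.4     # primary container: all steps run here
      - image: circleci/node:12     # secondary container: its binaries are NOT on the primary's filesystem
    steps:
      - checkout
      - run: npm install            # would fail: npm isn't available in the primary container
```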
Multiple Docker images are most useful when you need to connect to a resource over the network. A common use case is spinning up a Postgres container that needs to be available for certain tests.
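That use case looks something like the following — a minimal sketch, where the image tags, credentials, and test script are illustrative placeholders:

```yaml
version: 2.1
jobs:
  test:
    docker:
      - image: circleci/php:7.4         # primary container: steps run here
      - image: circleci/postgres:12     # secondary container: reachable over the network
        environment:
          POSTGRES_USER: testuser
          POSTGRES_DB: testdb
    steps:
      - checkout
      # Tests in the primary container connect to Postgres at localhost:5432;
      # the containers share a network but not a filesystem.
      - run: ./run-tests.sh
```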
If you are looking to install some CSS/JS assets with node on a PHP app, your best bet would probably be one of the -node convenience image variants, such as circleci/php:7.4-cli-node.
That will give you PHP with a node executable available to compile your assets. If you need a specific version of node, it's probably simplest to use the node orb to install it, rather than relying on the -node image variant.
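Using the orb looks roughly like this — a sketch, assuming the circleci/node orb's install command; the orb version and node version shown are placeholders to adjust:

```yaml
version: 2.1
orbs:
  node: circleci/node@5.1.0
jobs:
  build:
    docker:
      - image: circleci/php:7.4-cli-node
    steps:
      - checkout
      # Installs the requested node version into the primary container
      - node/install:
          node-version: '16.20'
      - run: node --version
```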
There is an older blog post that describes how to access resources in a secondary container, and the hoops you would need to jump through to do that.
So to sum things up: multiple Docker containers are most useful when you need to access a network resource, like a database, during your builds. If you need an additional tool, like node, you usually want to simply install it during the build.