I’m trying to figure out whether it is possible to access secondary images/containers from a remote Docker execution.

The background is that I am using `docker run` to execute tests inside my Docker artifact, as a final step to verify that the image was built correctly (as opposed to just running the tests in the CircleCI image). Now, my tests depend on a local DynamoDB container, which works fine with the dual-image setup of a regular build configuration:
```yaml
test:
  environment:
    DYNAMODB_ENDPOINT: http://localhost:8000
  docker:
    - image: circleci/node:12
    - image: circleci/dynamodb
  steps:
    - checkout
    - run: npm ci
    - run: npm test
```
The secondary image starts a container that is reachable on `localhost`, so far so good.
Now, what I do during deployment is build a Docker image using `setup_remote_docker`, and before pushing it I want to run the same tests again:
```yaml
build_deploy:
  environment:
    DYNAMODB_ENDPOINT: http://host.docker.internal:8000
  docker:
    - image: circleci/python
    - image: circleci/dynamodb
  steps:
    - aws-cli/install
    - checkout
    - setup_remote_docker
    - run: docker run -e DYNAMODB_ENDPOINT -it --rm the_built_tag /bin/sh -c "npm install && npm test"
```
The problem here is that I can’t find a way to reach the host of the initial runner (the `circleci/python` container), or rather the secondary container (the `circleci/dynamodb`-based one), from the container run on the remote Docker engine. I’ve tried `host.docker.internal`, which doesn’t even seem to resolve; `localhost` is completely wrong, and `--network="host"` makes no difference here.
Is this even solvable? My second option is to spin up DynamoDB with `docker run` as well and link the containers (roughly as sketched at the end of this post), but I thought that using the multi-image support in CircleCI would be much cleaner.
A lot of people seem to have the opposite problem (accessing a container spun up in the remote Docker environment from the primary container), but I’m trying to do the reverse, sort of.
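For reference, here is a rough sketch of that fallback, in case it clarifies what I’m trying to avoid. All of the `docker` commands target the remote Docker engine, so the containers can reach each other over a user-defined network (rather than legacy `--link`); the network name `tests`, the container name `dynamodb`, and the assumption that `circleci/dynamodb` listens on port 8000 (as in the test job above) are all mine:

```yaml
    # … inside the build_deploy steps, after setup_remote_docker:
    - run:
        name: Test the built image against a DynamoDB started with docker run (sketch)
        command: |
          # User-defined network so the containers can resolve each other by name.
          docker network create tests
          # Start DynamoDB on the remote Docker engine; "dynamodb" and "tests" are placeholder names.
          docker run -d --rm --name dynamodb --network tests circleci/dynamodb
          # Run the tests on the same network, pointing the endpoint at the DynamoDB container by name.
          docker run --rm --network tests \
            -e DYNAMODB_ENDPOINT=http://dynamodb:8000 \
            the_built_tag /bin/sh -c "npm install && npm test"
          # Tear down; --rm removes the container once it stops.
          docker stop dynamodb
```

This would probably work, but at that point the secondary `circleci/dynamodb` image in the job becomes redundant, which is why I’d prefer to reuse it if there is a way.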