And my tests were able to run successfully. Today (10/31), however, the container names have had a random identifier appended, making it impossible to connect to them:
I’m using the Docker executor. It’s worked for months, but the behaviour just changed overnight - I literally ran my last build at about 8pm Eastern and then tonight it failed because of the container names. Here’s the snippet of code:
OK. Have a look to see whether your version of Docker or Docker Compose changed. If you re-run the build that used to work, does it still pass? I wonder if an upstream change has broken things for you.
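For example, printing both versions at the start of the job makes an overnight change easy to spot (these are the standard version flags, nothing CircleCI-specific assumed):

# Record the tool versions in the build log so an upstream
# change between runs is immediately visible.
docker --version
docker-compose --version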
That could come from a CircleCI change in their convenience image, but it could equally come from upstream of them, and there is not much they can do about that. I have installed a specific version of Docker Compose using pip on a bare-bones Docker image, and it has been rock-solid for ~12 months (daily builds).
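A minimal sketch of that pinning approach (1.22.0 is only an example pin; use whichever release you have validated):

# Install an explicitly pinned Docker Compose release via pip instead
# of relying on whatever the base image ships; 1.22.0 is an example.
pip install docker-compose==1.22.0
# Confirm the pin took effect.
docker-compose --version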
I was finally able to fix this with a workaround - basically parsing the docker ps output to grab the actual name of the container:
# Start the services defined in the CI compose file in the background.
docker-compose -f vendor/docker/docker-compose-cicd.yml up -d
# Grab the webapp container's actual (now randomised) name by matching
# on the stable prefix.
webapp=$(docker ps -a --format '{{.Names}}' | grep "docker_cowbull_webapp_1_")
# Persist the name so subsequent CircleCI steps can use it.
echo "export webapp=$webapp" >> $BASH_ENV
The tests have been running regularly (at least once a week) for 3 months and I can’t figure out why this broke; however, the kludge above at least gets the tests running again. The cause appears to be a change in the naming convention (a random identifier is now appended), which started last week.
P.S. The last line, echo "export webapp=$webapp" >> $BASH_ENV, ensures that the environment variable holding the container name is passed to the next step.
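So a following step can use it directly, something like this (the test command itself is just a hypothetical illustration):

# $webapp is re-exported into this step's shell via $BASH_ENV;
# the pytest invocation is only an example command.
docker exec $webapp python -m pytest /app/tests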
Hopefully this helps if anyone else runs into this.