I have an old Rails application that I'm trying to dockerize, and it has 300+ environment variables.
I'm trying desperately to avoid adding all 300 CircleCI env vars for each environment (staging and dev), then passing all 300 into my Dockerfile and declaring 300 ARGs and 300 ENVs inside it. A subset of these is needed to initialize the environment so that rake assets:precompile can run, and I would like the assets to be precompiled on CircleCI.
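For context, this is the pattern I'm trying to avoid — a Dockerfile sketch with hypothetical variable names standing in for the real 300:

```dockerfile
# Each variable needs an ARG (to receive the build argument)
# and an ENV (to persist it into the image).
# VAR_1, VAR_2, ... are placeholders, not my real variable names.
ARG VAR_1
ENV VAR_1=$VAR_1
ARG VAR_2
ENV VAR_2=$VAR_2
# ...298 more pairs...
```

and the matching docker build command would need a --build-arg flag for every one of them.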
The issue is that there's no easy way to pass all of these variables into docker build.
So my "solution" is to build the image, use docker run to start a container from it, execute assets:precompile inside it, stop the container, and then docker commit the container's changes to a new image. In theory this should work, and the commands below do work when I'm SSH'ed into the CircleCI machine — but not when they run in my pipeline.
My config.yml step looks like this:

    - run:
        name: Start Container Precompile Assets Commit New Image
        command: |
          docker run --name=my-container --env-file=somanyvars.env my-image:my-tag tail -f /dev/null &
          docker exec my-container bundle check
          docker commit my-container my-new-image:my-tag
          docker stop my-container
And the error I'm getting is "Error: No such container: my-container" on the docker exec line.
It looks like I can't run the container in the background (or if I can, it exits immediately). I've tried numerous incantations, including chaining each command with &&, and the container I spin up just never seems to be available.
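One of the incantations I tried, for reference — everything chained into a single shell line so it all runs in one step (a sketch; image names and step name are the same as above):

```yaml
- run:
    name: Start Container Precompile Assets Commit New Image
    command: docker run --name=my-container --env-file=somanyvars.env my-image:my-tag tail -f /dev/null & docker exec my-container bundle check && docker commit my-container my-new-image:my-tag && docker stop my-container
```

Same result: the exec never finds the container.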
Note: previously, instead of naming the container, I was using docker ps -lq to get the ID of the last-run container. But during my build, a /different/ container was returned as the last-run container than the one I had spun up, so I switched to naming the container.
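For completeness, that earlier approach looked roughly like this (a sketch, with the same image names as above):

```yaml
- run:
    name: Start Container Precompile Assets Commit New Image
    command: |
      docker run --env-file=somanyvars.env my-image:my-tag tail -f /dev/null &
      # docker ps -lq returns the ID of the most recently created
      # container -- which, it turned out, was not always mine
      CONTAINER_ID=$(docker ps -lq)
      docker exec $CONTAINER_ID bundle check
      docker commit $CONTAINER_ID my-new-image:my-tag
      docker stop $CONTAINER_ID
```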
I'm a bit lost. I'm not tied to this approach, but I'm 99.99% against manually declaring 300 env vars all over the place, and this seems like it should work (it does locally).