Passing environment variables to docker containers

I’d like to pass an API key to a docker container, but I can’t figure out if there is a way to do it. Obviously I don’t want to keep the key in my source files.

jobs:
    test_e2e:
        docker:
            - image: circleci/node:12.16.0
            - image: ourCustomImage:latest
              command: sh -c "API_KEY=${API_KEY} node app ."
              environment:
                  API_KEY: $API_KEY

The problem is that environment variables cannot be interpolated in the environment, entrypoint, or command attributes. Or am I missing a trick?

I found a similar post, Passing dynamic environment variable to a secondary docker container, which received zero answers. But it’s a year old, so I’m trying my luck.

Thanks in advance!


If env var interpolation still does not work for entrypoints and commands, you could try abandoning secondary Docker containers in the CircleCI infrastructure and spinning up your images with Docker Compose inside your build container. I presume that you will have more control over the env vars you inject into the containers that way.
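As a rough, untested sketch of what I mean (the compose service name, the node command, and the use of setup_remote_docker are assumptions on my side):

# docker-compose.yml (hypothetical service name)
version: "3"
services:
  app:
    image: ourCustomImage:latest
    environment:
      # ${API_KEY} is interpolated by Docker Compose from the shell that runs it,
      # so a CircleCI project env var is picked up when the job runs `up`
      API_KEY: ${API_KEY}
    command: sh -c "node app ."

# .circleci/config.yml excerpt (sketch)
jobs:
  test_e2e:
    docker:
      - image: circleci/node:12.16.0
    steps:
      - checkout
      - setup_remote_docker
      - run: docker-compose up -d

Bear in mind that with setup_remote_docker the containers run on a remote engine, so you cannot map their ports back to the primary container.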

Not to beat a dead horse, but was any progress made in this regard? I can go the docker-compose route, but not being able to map the ports to the host makes it tricky to use in my case.

Thank you!

I see that the feature does work. I created a simple test case where I am able to pass an environment variable from CircleCI into a container. CircleCI masks the value in the output, but I can confirm that before I actually set the CircleCI project environment variable, the output was “$foo” because no environment variable was defined. Source repo: https://github.com/buchs/circle-ci-test/blob/main/.circleci/config.yml. CI job output:
[screenshot of the CI job output, with the value of FOO masked]

In the event the repo goes away, here is the config.yml:

---
jobs:
  build:
    docker:
      - image: alpine
    environment:
      FOO: $foo
    steps:
      - run: sh -c env

Thank you Kevin!

I extended your simple test example to demonstrate where it fails to work in my case. Here is the PR with a bit of description: chore: test executor by dmi3y · Pull Request #1 · buchs/circle-ci-test · GitHub

Hopefully it will be useful for understanding the use case.

I spent time on this and ended up with the following approach:

Inside of CircleCI config.yaml:

docker build -f Dockerfile --build-arg FOO_VERSION="$(./foo_echo_version_script)" -t $(pwd | xargs basename):latest .

Then in my Dockerfile:

ARG FOO_VERSION
ENV MY_FOO_VERSION ${FOO_VERSION}

Now MY_FOO_VERSION was available inside my container at run time.
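For context, here is a rough sketch of how that build step sits inside a job; the job name and executor image are placeholders, not exactly what I run:

# .circleci/config.yml excerpt (sketch)
jobs:
  build_image:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Bake the version into the image as a build arg
          command: |
            docker build -f Dockerfile \
              --build-arg FOO_VERSION="$(./foo_echo_version_script)" \
              -t "$(pwd | xargs basename):latest" .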


I’ve seen many examples of doing this with a single env variable, but has anyone seen an example where an entire .env file, with 50+ lines of variables, can be passed into the build container?

I don’t want to manage this via the CircleCI web UI for Env or Context. I just want to be able to keep a set of .env files locally and push them up (securely) when they are needed to execute the CI build.


In many ways, you are looking for a secrets manager that can sync values with a CircleCI project via the CircleCI API. I’m not sure how many products are out there that do this, but the service from doppler.com has been able to handle my needs.
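If you would rather script it yourself, the same idea can be sketched against the CircleCI v2 API, which can create project environment variables. The project slug, token variable, and .env path below are assumptions, and it assumes the values need no JSON escaping:

# sketch: push a local .env file into CircleCI project env vars
# assumes CIRCLE_TOKEN holds a personal API token and the project lives at gh/my-org/my-repo
while IFS='=' read -r name value; do
  # skip blank lines and comment lines
  [ -z "$name" ] && continue
  case "$name" in \#*) continue ;; esac
  curl -s -X POST "https://circleci.com/api/v2/project/gh/my-org/my-repo/envvar" \
    -H "Circle-Token: $CIRCLE_TOKEN" \
    -H "Content-Type: application/json" \
    -d "{\"name\": \"$name\", \"value\": \"$value\"}"
done < .env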

Thanks for the tip. I spent the weekend checking it out. A lot of promise there, a few bumps in the road, but I think I can make Doppler work as a solution.

Oh boy, glad it works, but it also feels wrong.
Why can’t we just pass --env?

The limitation is Docker, which does not really allow a container’s environment to be defined from within another Docker container.

So CircleCI’s solution is to create an environment where all the containers are already running before the config.yml file takes control. This then imposes a major limitation on the environment variables that can be passed to each container.

The alternative is to just use a standard machine instance, in which case you can do what you want with Docker containers, but the config.yml steps are then executed in the machine space, not inside a chosen container.
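Roughly, that route looks like this; the machine image tag is an assumption, and it assumes the image can be pulled from a registry the machine can reach:

# .circleci/config.yml excerpt (sketch)
jobs:
  test_e2e:
    machine:
      image: ubuntu-2004:current
    steps:
      - checkout
      - run:
          name: Start the container, passing the key with -e
          command: docker run --rm -e API_KEY="$API_KEY" ourCustomImage:latest node app .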

CircleCI seems to have switched its focus to providing better integration with k8s, as that does not have the same limitations. As I’m Docker-focused, I cannot comment on how good the k8s environment is.