Getting configuration into a secondary container

TL;DR. I’m trying to get some configuration into a secondary container. How do I do that?

The application under test is a React front-end app. I’m running e2e tests, so the app under test needs a back-end API application to talk to. This back-end API app (which I have full control of) is wrapped in a container and run alongside my main front-end application, as a secondary container.

I found myself unable to make my back-end API app run successfully in a secondary container: the app needs a bunch of ENV variables to be present, and I cannot wrap my head around how to set them via .circleci/config.yml.
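For reference, this is roughly what the setup looks like in .circleci/config.yml (image names are placeholders, not my real ones). The `environment` key on the secondary image is the only knob I have found, and as far as I can tell it only accepts literal values — project-settings variables and contexts are not interpolated into it, which is exactly the problem:

```yaml
version: 2.1
jobs:
  e2e:
    docker:
      - image: cimg/node:16.20         # primary container: runs the e2e test steps
      - image: myorg/api-backend:test  # secondary container: the API the app talks to
        environment:                   # literal values only; something like
          NODE_ENV: test               # ${STRIPE_KEY} from project settings would
          API_PORT: "3000"             # NOT be expanded here
    steps:
      - checkout
      - run: npm run e2e
```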

As I see it, there are several ways I could set ENV variables on the secondary container:

  1. “embed” the variables into the API container image directly, then push it to the registry. This is generally perceived as bad practice: if the Docker image leaks, the ENV “secrets” will leak with it. The secrets I am talking about here are dev/test keys to various 3rd-party services (Stripe, Twilio, etc).

    Alternatively, keep the .env file together with the source code.

    This is obviously not an option either.

  2. update the API app code to read secrets from a file rather than ENV. This implies:

    1. put all the secrets in, let’s say, a .env file, encrypt it, and store it within the API source repository,
    2. during the CircleCI build, using a secret password from CircleCI project settings, decrypt the file inside API app container.

    To go with this option I am willing to update the API code; however, here is what I am missing: how do I tell my API app, “before you start, please take this secret password and perform the .env decryption”?

  3. Add the .env contents verbatim to my front-end app’s CircleCI project settings. This way there should be no need to decrypt anything at all, i.e. I can rely on CircleCI fully.

    This effectively means going with item two from the “Storing secret files (certs, etc.)” help article.

    However, I am not sure I understand the following: how do I pass the .env file into my secondary container with the API app before it starts? Is there a way I could mount the file into the yet-to-be-started container?

I have a feeling that passing sensitive configuration to a secondary container must be a solved problem by now, and I must be going in circles. How is everyone else dealing with passing config to secondary containers?

P.S. A very similar thing was asked in “Passing dynamic environment variable to a secondary docker container” on this forum, and there’s an “Allow passing of context or other shared env vars into secondary docker images in a job” feature request that awaits more votes.

Someone else in another thread pointed out that it’s possible to fall back to a machine build:

> I got around it by using the machine executor and running the docker commands manually.

However, my pipeline is quite big, and I just don’t feel like hijacking it by changing every step to docker run .... I am also afraid that making my workflow rely on machine: true will cause a significant slowdown.
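For anyone who does accept the machine route, the detail that makes it work is that `docker run -e NAME` with no value forwards the variable from the surrounding shell, where project-settings env vars are available. A sketch, with image and variable names as placeholders:

```yaml
jobs:
  e2e:
    machine: true
    steps:
      - checkout
      - run:
          name: Start API container manually
          command: |
            # STRIPE_KEY and TWILIO_SID come from CircleCI project settings;
            # `-e NAME` with no value copies them from this shell into the container
            docker run -d --name api -p 3000:3000 \
              -e STRIPE_KEY -e TWILIO_SID \
              myorg/api-backend:test
      - run: npm run e2e
```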

You can customise the command and the entrypoint for secondary containers, and then in that container you can read the command-line args in the usual way for a Linux process.
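If I read that right, the config.yml keys for this are `entrypoint` and `command` on the secondary image, something like the following (the script path and flags are hypothetical, and these values would still have to be literals):

```yaml
docker:
  - image: cimg/node:16.20
  - image: myorg/api-backend:test
    entrypoint: ["/app/start.sh"]                # wrapper that parses its argv
    command: ["--env-file", "/app/.env.enc"]     # args the app reads at startup
```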