I have a workflow configured with parameters that get populated from a context: basically AWS CLI keys/secrets. I want to run that workflow against each of my environments; the only thing that changes is which keys it uses (dev/stg/perf/prod, for example).
Is there a way to use a single workflow and dynamically configure which keys to use based on a passed-in pipeline parameter?
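For reference, what I would ideally like to write is something along these lines (illustrative only; the parameter and context names are made up):

    version: 2.1

    parameters:
      env:
        type: enum
        enum: [dev, stg, perf, prod]
        default: dev

    workflows:
      deploy:
        jobs:
          - deploy_job:
              # what I'd like: pick the context from the pipeline parameter
              context: aws-<< pipeline.parameters.env >>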
Not within the current CircleCI environment: ‘Contexts’ are injected into a workflow right at the start, and there is no way to select a named context when the workflow is executed.
I had the same issue at the start of our process design, as our target list is long and dynamic in nature. As an example, I currently have the following target list: dev1, dev2, dev3, test1, test2, test3, sandbox1, int1, demo, staging, prod1, and prod2 for 3 of our projects, each of which is parameter driven (to the extreme), as my background is ‘infrastructure as code’, with additional security roles.
My solution was to place all parameters in a service called Doppler, which is a key/value pair storage solution. Each project then has a controlled set of key/value pairs, with a lot of operational control over which systems can access what at build time.
The result is something that is overly complicated for a single project, but it works well as a foundation for the ops side of IT (it also drives system deployment in-house and at AWS).
My resulting config.yml files contain what is basically a switch statement that selects the right Doppler access token based on an incoming git tag (a pipeline parameter would also work). This token is then passed around all the jobs and commands, so that key values can be accessed from within shell scripts using the CLI provided by Doppler.
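In outline, the switch looks something like this (not my actual config: the tag patterns and token names are placeholders, and the real token values live in a CircleCI context):

    select_doppler_token:
      steps:
        - run:
            name: select Doppler token from git tag
            command: |
              # map the incoming tag onto the service token for that target;
              # later steps in the job then pick up $DOPPLER_TOKEN via $BASH_ENV
              case "$CIRCLE_TAG" in
                dev*)     echo "export DOPPLER_TOKEN=$DOPPLER_TOKEN_DEV" >> "$BASH_ENV" ;;
                staging*) echo "export DOPPLER_TOKEN=$DOPPLER_TOKEN_STAGING" >> "$BASH_ENV" ;;
                prod*)    echo "export DOPPLER_TOKEN=$DOPPLER_TOKEN_PROD" >> "$BASH_ENV" ;;
                *)        echo "unknown target tag: $CIRCLE_TAG" >&2; exit 1 ;;
              esac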
The easiest thing, I think, is to give a basic example. Below is the config.yml command I now use to log in to Docker:
    docker_login:
      parameters:
        doppler_token:
          type: string
      steps:
        - run:
            name: docker login
            command: |
              DOPPLER_CONFIG="" DOPPLER_ENVIRONMENT="" DOPPLER_PROJECT=""
              echo "<<parameters.doppler_token>>" | doppler configure set token --scope / --no-check-version
              DOCKER_USER=$(doppler secrets get --silent --plain DOCKER_USER)
              DOCKER_PASSWORD=$(doppler secrets get --silent --plain DOCKER_PASSWORD)
              docker login --username "$DOCKER_USER" --password "$DOCKER_PASSWORD"
So:
- The command is passed the token needed to access the right set of values held in Doppler.
- I clear down all the Doppler environment vars, just to be sure.
- Using the Doppler CLI, I log in to Doppler.
- I retrieve the user and password values held for Docker from Doppler.
- I use the values as you would expect.
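To use it, a job just passes the token along, e.g. (the job name and image here are illustrative):

    jobs:
      build:
        parameters:
          doppler_token:
            type: string
        docker:
          - image: cimg/base:stable
        steps:
          - setup_remote_docker
          - docker_login:
              doppler_token: << parameters.doppler_token >>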
Thanks for the response. I’ll have to see what makes sense for me.
I was wondering if I could do something with dynamic configs, where I check the passed env tag, then update a “template” by sed-ing in the parameters I’m grabbing from the context, and launch that config.
Might be more effort than it’s worth, but maybe not.
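Roughly what I’m picturing (completely untested; the template path and placeholder are invented):

    version: 2.1
    setup: true

    parameters:
      env:
        type: string
        default: dev

    orbs:
      continuation: circleci/continuation@0.3.1

    jobs:
      generate_config:
        docker:
          - image: cimg/base:stable
        steps:
          - checkout
          - run:
              name: render config for the requested env
              command: |
                # substitute the __ENV__ placeholder with the pipeline parameter
                sed "s/__ENV__/<< pipeline.parameters.env >>/g" \
                  .circleci/template.yml > /tmp/generated_config.yml
          - continuation/continue:
              configuration_path: /tmp/generated_config.yml

    workflows:
      setup:
        jobs:
          - generate_config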
I started with something like that, but I was trying to manage far too many key pairs for it to be maintainable (the deployment step has 142 values alone), hence my rather complicated/advanced solution. If you have good shell/sed skills there is a lot you can do with the right input stream.
Doppler also has the ability to operate a hierarchical structure, so I can have a master config with all the defaults and then, for each target, just make the required changes.
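At build time that means a job only needs the one service token for its target, e.g. (a sketch, not my actual scripts):

    # DOPPLER_TOKEN comes from the switch described above (a service token
    # scoped to one target's config); doppler run injects that target's
    # resolved key/values as environment variables for the child process
    doppler run -- ./deploy.sh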
Dynamic configs offer a lot of flexibility: you can hide all of the config logic within the first .yml file, while also being able to promote environment variables to parameters for better processing by the called .yml file. One of my future refactoring tasks will most likely be a move to dynamic configs, so the overly long ‘switch’ statement ends up in the first .yml file and the operational/environmental steps are separated from the build/deployment steps. This should then provide better separation between OPS and DEV tasks.
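For example, the first .yml could run the switch and then promote the selected token to a parameter of the continued config. A sketch only, assuming the circleci/continuation orb (which accepts either a JSON string or a JSON file for its parameters; file names invented):

    # steps at the end of the setup job, after the switch has set $DOPPLER_TOKEN
    - run:
        name: promote token to a pipeline parameter
        command: |
          printf '{"doppler_token": "%s"}' "$DOPPLER_TOKEN" > /tmp/pipeline-parameters.json
    - continuation/continue:
        configuration_path: .circleci/continue_config.yml
        parameters: /tmp/pipeline-parameters.json

One caveat: the token then shows up as a pipeline parameter value, so it needs to be treated with the same care as any other secret.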
It is also worth noting that my choice of configuration was made at the start of this year. At the same time, CircleCI had a security issue and was making it as clear as they could to all customers that they needed to change all secret values stored within CircleCI and in any repos that CircleCI could access. So I invested more time than I might have done normally into placing such values into Doppler.
So, based on the context and the existing follow-up, I’m guessing you don’t just want to run the same workflow N times with a different context specified for each?
If that doesn’t work for you, one other potential option that I’ve done before is to use a single context (or include all the contexts in a single workflow) and then dereference the env var in a shell script. This has the downside of all env vars for all contexts being present, but may help if that’s what you’re trying to do.
For example, something like:
    # So in this case, you'd have `$FOO_BAR_DEV` and `$FOO_BAR_PROD` etc.
    # as available env vars.
    VAR_NAME="FOO_BAR_${ENV}"
    # Now dereference that to set `$FOO_BAR` to the value of `$FOO_BAR_${ENV}`
    echo "export FOO_BAR=\$$VAR_NAME" >> "$BASH_ENV"
where $FOO_BAR is the environment variable name you want to be populated with the appropriate credentials.
IMO, the other approach (reuse the same “job” or orb step, but define a workflow for each environment separately) is probably preferable in most cases, even if it is a little less DRY: you can still make it pretty DRY, it will be easier to read, and you get a bit better isolation of your credentials.
Also, for AWS, I would suggest switching to OIDC if you can: that should much more easily let you avoid per-env keys, and I think it will make what you’re trying to do easier, as well as being a lot safer overall.
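e.g., with an IAM role configured to trust CircleCI’s OIDC identity provider, a job can exchange its identity token for short-lived credentials (the role ARN below is made up):

    # CIRCLE_OIDC_TOKEN is provided automatically in jobs that use a context
    creds=$(aws sts assume-role-with-web-identity \
      --role-arn "arn:aws:iam::123456789012:role/circleci-${ENV}" \
      --role-session-name "circleci-${CIRCLE_BUILD_NUM}" \
      --web-identity-token "$CIRCLE_OIDC_TOKEN" \
      --query 'Credentials' --output json)
    export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r '.AccessKeyId')
    export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r '.SecretAccessKey')
    export AWS_SESSION_TOKEN=$(echo "$creds" | jq -r '.SessionToken')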
So these credentials all exist within the same context. The only difference between workflows is 3 parameters.
So I have:
    jobs:
      - jobs_name:
          context:
            - context_name
          aws-access-key-id: $ENV_AWS_ACCESS_KEY_ID
          aws-secret-access-key: $ENV_AWS_ACCESS_SECRET
          env: $ENV
For now, $ENV is hardcoded into each job within the workflow, so with 4 environments I have 4 workflows which are identical except for credentials. When something changes I need to update multiple spots (or educate others on that), and I’d like to reduce that risk.
If the variation between configurations is only 3 values over 4 environments, the idea put forward by @wyardley would make the most sense; my solution is built around the need to handle hundreds of values across 10+ environments.
You would end up with a single context, usable by all 4 workflows, that defines the 3 values 4 times, with the name of each value including its target environment.
You would then just select the correct value with a switch-like structure within the workflow section of your config.yml, based on something like a parameter or tag. Or, as @wyardley showed in his example, you could do much of the selection within shell scripts using a global environment variable that is set from a parameter or tag.
Such a configuration means that all the values you need to maintain are held in just one context, so there is no duplication, with just one downside: the current values are not visible.
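Concretely, with a pipeline parameter driving the selection, that could look something like this (a sketch only; the parameter, variable, and context names are invented):

    version: 2.1

    parameters:
      env:
        type: enum
        enum: [dev, stg, perf, prod]
        default: dev

    jobs:
      deploy:
        docker:
          - image: cimg/base:stable
        steps:
          - run:
              name: select credentials for << pipeline.parameters.env >>
              command: |
                # the single context defines AWS_ACCESS_KEY_ID_DEV, _STG, _PERF,
                # _PROD and the matching secret keys; pick the pair for this env
                ENV=$(echo "<< pipeline.parameters.env >>" | tr '[:lower:]' '[:upper:]')
                echo "export AWS_ACCESS_KEY_ID=\$AWS_ACCESS_KEY_ID_${ENV}" >> "$BASH_ENV"
                echo "export AWS_SECRET_ACCESS_KEY=\$AWS_SECRET_ACCESS_KEY_${ENV}" >> "$BASH_ENV"

    workflows:
      deploy:
        jobs:
          - deploy:
              context: context_name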