DRY way to do multi-env build & deploy jobs?



I work with Node & I have two different environments that I want to deploy to: staging & production.

The workflow should be:

Every push to master -> build -> deploy to staging
Every v* tag -> build -> deploy to production

My build job is just running tests.
My existing deploy script leverages the existing node tooling, so I want to run it with exactly the same environment and dependencies as my tests.

Ultimately, after a master build I want to run the command yarn deploy staging, and after a tag build I want to run yarn deploy production.

My build task looks like this:

    build:
      working_directory: ~/TryGhost/my-repo
      docker:
        - image: circleci/node:6.13.0
      steps:
        - checkout
        - run: yarn
        - run: yarn test

My workflows need to look something like:

    workflows:
      version: 2
      build-deploy:
        jobs:
          - build
          - deploy-staging:
              requires: [build]
              filters: { branches: { only: master } }
          - deploy-production:
              requires: [build]
              filters: { tags: { only: /^v.*/ } }

However, the only thing I can see to do is to create 2x deploy jobs which are identical to the build job but swap yarn test for yarn deploy x.

That means having 3x duplication of the docker setup, checkout, dependency install, etc.

Am I missing something?


My first thought was YAML references, which allow a DRY way to specify parts of a YAML structure. They’ve been mentioned a few times before here.
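For instance, the shared docker/working-directory setup can be declared once as an anchor and merged into each job with the YAML merge key. This is a hypothetical sketch: the deploy job name and steps are illustrative, not taken from your config.

```yaml
version: 2

# Shared job configuration, defined once and reused below
defaults: &defaults
  working_directory: ~/TryGhost/my-repo
  docker:
    - image: circleci/node:6.13.0

jobs:
  build:
    <<: *defaults           # merge the shared keys into this job
    steps:
      - checkout
      - run: yarn
      - run: yarn test
  deploy-staging:
    <<: *defaults
    steps:
      - checkout
      - run: yarn
      - run: yarn deploy staging
```

Each job still lists its own steps, but the container image and working directory live in one place.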

Also, if that’s not flexible enough for you, you could use a scripting language of your choice to create the data structure of a YAML file, and then write it out to the .circleci/config.yml target using a YAML writer library. That would allow you to make the source file (of whatever format) as DRY as you like.



Hmm… ok so the YAML references solve the config duplication.

But what about the duplicate work of having to set up Docker, check out the code, and install dependencies for both the build and deploy jobs?

Is there a way to keep the container around for the next job in the workflow?


Not that I know of as such, but Workspaces might do what you’re looking for - they let you pass data on down through the pipeline by sharing a directory hierarchy across jobs.
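A minimal sketch of that idea, assuming the deploy step only needs the checked-out code plus node_modules (the job names, paths, and deploy command are illustrative):

```yaml
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:6.13.0
    steps:
      - checkout
      - run: yarn
      - run: yarn test
      # Save the installed dependencies for downstream jobs
      - persist_to_workspace:
          root: .
          paths:
            - node_modules
  deploy:
    docker:
      - image: circleci/node:6.13.0
    steps:
      - checkout
      # Reattach node_modules from the build job instead of reinstalling
      - attach_workspace:
          at: .
      - run: yarn deploy staging
```

The deploy job still spins up a fresh container, but it skips the dependency install by reusing what the build job persisted.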


I can’t help too much on that, though @jws’ suggestion sounds good. My own work on Circle is based on microservices, so I build everything separately, and then pull the resulting images in from a registry when I want to do something with them (integration testing, deployment, etc).


This topic was automatically closed 41 days after the last reply. New replies are no longer allowed.