Your compiled config is too large to be processed. Please contact support

I’ve experienced this at the day job and have the following tips:

If we assume the 3MB limit is a hard limit - which, from my understanding, it is - you can estimate the expanded size of your pipeline using the circleci CLI's `config process` command. Yes, OP already found this, but here's some more data.

There are a number of things to know about this command:

  1. If you're using private orbs you need to specify the `--org-slug` of the organization hosting the orb (`github/rwilcox` or `bitbucket/foobar-org`, let's say). Thank you, a conversation on this very forum.
  2. In this scenario (private orbs) you'll also need to provide a value for `--token`.
  3. If you're using the continuation orb you'll also need to provide the pipeline parameters you would pass to the continuation orb. See below for more on why.
  4. `circleci config process` doesn't actually do any magic: it seems to make a request to some Circle web API, so if your config is too big it will simply spit out the same error Circle itself does.
  5. If all you get back is a YAML file with a bunch of commented-out "original YAML", you're doing it wrong - perhaps by not providing pipeline parameters.

A sample command might be:

```shell
circleci config process --org-slug github/rwilcox --token $MY_CIRCLE_TOKEN --pipeline-parameters /tmp/.circleci/output_path_from_a_previous_command.json .circleci/pipeline.yml
```
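For reference, that parameters file is just a JSON map of parameter names to values. The names and values here are purely hypothetical:

```json
{
  "run_integration_tests": true,
  "deploy_target": "staging"
}
```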

Now, although this command fails in the exceptional case, in the happy path it can be used for monitoring, or at least for understanding what Circle is actually doing and where you can trim your configuration. I highly suggest saving the result of this command as an artifact so you can view it in a good desktop YAML tool.
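As a sketch of that artifact idea - assuming the circleci CLI is available in your build image, and that a token is set if you use private orbs - a monitoring job might look something like:

```yaml
# Hypothetical job name and image; assumes the circleci CLI is installed
# in the image you pick.
jobs:
  measure-config:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run:
          name: Render compiled config and report its size
          command: |
            circleci config process .circleci/pipeline.yml > /tmp/processed.yml
            wc -c /tmp/processed.yml   # compare against the ~3MB limit
      - store_artifacts:
          path: /tmp/processed.yml
```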

Now, the resulting YAML is super interesting to look at. It has the following characteristics:

  1. Orb commands are expanded inline into their bare run steps. Thus moving an oft-used command into a public or private orb has no real effect on size (see the sketch after this list).
  2. Judging by the output, the same is true for commands you've defined in your pipeline: DRYing up your YAML is great for developer experience but doesn't help here.
  3. parameters blocks and description keys are not preserved. Be kind to humans, it costs nothing!
  4. Circle does do some level of pre-processing, excluding YAML blocks whose when expression evaluates to false. In other words, false conditional workflows don't seem to count against the size limit. This seems to be true, but it comes from us doing experimental / observational science (potentially badly). SCIENCE!
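To make items 1 and 2 concrete, here's roughly what happens. A hypothetical reusable command like this:

```yaml
# Hypothetical 2.1 source: one reusable command, used in two jobs.
version: 2.1

commands:
  notify:
    steps:
      - run: ./scripts/notify.sh   # hypothetical script

jobs:
  build:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - notify
  test:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - notify
```

...comes out of `config process` with no `commands:` block at all: each use of `notify` becomes its own inline run step, so a command's body is paid for at every call site, not once.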

Things we did to reduce our pipeline size:

  1. Use when statements - or some other form of pre-processing - to avoid submitting code to Circle you don't need to run, if you can (a sketch follows this list).
  2. An orb command may bring a whole shell script into the pipeline. Normally this is great: it's an easy way to pull functionality into the pipeline without the shell script needing to exist on the file system. HOWEVER, if you commonly use an orb command that inlines 50+ lines of shell into your pipeline every time... well, there goes a chunk of your budget.
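A minimal sketch of the when approach from item 1, using a hypothetical boolean pipeline parameter:

```yaml
# Hypothetical: gate an entire workflow on a pipeline parameter. When
# run_nightly is false, the workflow drops out of the compiled config
# and (per our observations) doesn't count against the size limit.
version: 2.1

parameters:
  run_nightly:
    type: boolean
    default: false

jobs:
  nightly-build:
    docker:
      - image: cimg/base:stable
    steps:
      - run: echo "only compiled in when run_nightly is true"

workflows:
  nightly:
    when: << pipeline.parameters.run_nightly >>
    jobs:
      - nightly-build
```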

Some things we tried that probably didn't end up helping:

  1. Refactoring pre-steps and post-steps into normal steps, where we could. This did nothing, because they get compiled into regular steps anyway (see the sketch after this list).
  2. Maybe the test splitting and parallelism features in the Circle 2.0 format would help reduce size. Our problem would require a bit of refactoring to take advantage of this, but we may just need to try harder.
  3. While when statements drop out parts of the pipeline that evaluate to false, filters blocks (branch/tag filters) don't seem to: you still appear to be paying that cost.
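For context on item 1: pre-steps and post-steps are the hooks you can attach to an orb's job from your workflow, and in the processed output they're simply spliced into the job's step list - which is why converting them to normal steps changed nothing for us. A hypothetical example:

```yaml
# Hypothetical orb and job names. In the compiled 2.0 output, these
# pre-steps/post-steps are just merged into the job's regular steps.
version: 2.1

orbs:
  some-orb: acme/some-orb@1.2.3   # hypothetical orb

workflows:
  main:
    jobs:
      - some-orb/deploy:
          pre-steps:
            - run: echo "before deploy"
          post-steps:
            - run: echo "after deploy"
```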

A very interesting takeaway for the folks who have been with Circle for a long time: it looks like Circle 2.1 syntax just "compiles down" to Circle 2.0. As a relatively new Circle user learning from some experienced hands, I'm not entirely sure what the experts can do with this information, but there you are.

Hope this helps. I do wish the error message from Circle would at least include the rendered size (something like "config too big: allowed 3MB, submitted 25MB") so you know how much work you have ahead of you. For right now it's a bit of a guessing game - based on the pipelines that don't fail - as to how much your larger pipelines exceed the limit by. But `config process` helps some, assuming you're holding it right and all that (which is slightly trickier than it would seem).

Good luck