Your compiled config is too large to be processed. Please contact support

Hi everyone,

So we are using a monorepo and encountered this error in our pipeline:

“Your compiled config is too large to be processed. Please contact support.” The only link I found regarding this issue is this one: Maximum Size of a CircleCI Configuration (config.yml).

Does anybody know whether the size limit (3MB) can be increased? Also, does anybody have any general tips to avoid this problem, or to debug it? I’ve tried decreasing the number of jobs in our repo pipeline (we also use the dynamic config feature).

Thanks a lot for your help!

Can you just confirm that you have a single config.yml file that has reached 3MB in size?

If so, I think most ideas put forward will be to break your process into a number of smaller processes.

Thanks for your response. To be honest, I don’t know the exact compiled size of the config itself; it would be helpful if there were a debug log so I could at least pinpoint which job consumes the most size when compiled.

Also, is there some way to increase the compiled config size limit?

The best solution may be to open a support ticket.

Even on the ‘free’ service level I have found that I can get feedback via the ticket system within a few working days and with more detail than here on the general forums as the support team can view account details.

I see, will try to create a new ticket there, thanks for the pointer!

Have you received any response about your issue? I have the same issue; I created a support request too, but maybe you have some information about the possible cause.

Yep, they already acknowledged the problem and said they’re able to reproduce it. Still waiting for their next response on the actual root cause.

I’ve experienced this at the day job and have the following tips:

If we assume that the 3MB limit is a hard limit (which, from my understanding, it is), you can do some estimation/guessing of the expanded size of your pipeline using the circleci CLI’s process command, which, yes, OP already found, but here’s some more data.

There are a number of things to know about this command:

  1. If you’re using private orbs you need to specify the --org-slug of the organization hosting the orb (so github/rwilcox or bitbucket/foobar-org, let’s say). Thanks to a conversation on this very forum for that.
  2. In this scenario (private orb) you’ll also need to provide a value for --token.
  3. If you’re using the continuation orb you’ll also need to provide the pipeline parameters you would pass to the continuation orb. See below for more information about why.
  4. circleci config process doesn’t actually do any magic; it seems to make a web API request to some Circle API, so if your config is too big it will simply spit out the same error Circle itself does.
  5. If you just get back a YAML file with a bunch of commented-out “original YAML,” you’re doing it wrong, perhaps by not providing pipeline parameters.

A sample command might be:
circleci config process --org-slug github/rwilcox --token $MY_CIRCLE_TOKEN --pipeline-parameters /tmp/.circleci/output_path_from_a_previous_command.json .circleci/pipeline.yml

Now, although this command fails in the exceptional case, it can be used for monitoring, or at least for understanding what Circle is actually doing in the happy path and where you can trim your configuration. I highly suggest saving the result of this command as an artifact so you can view it in a good desktop YAML tool.

Now, the resulting YAML is super interesting to look at. It has the following characteristics:

  1. Orb commands are expanded inline to their bare run statement parts. Thus moving an oft-used command into a public or private orb will have no real effect.
  2. Based on the output, this also looks true for commands you’ve defined in your pipeline: DRYing up your YAML is great for developer experience but doesn’t help here.
  3. parameter blocks and description keys are not preserved. Be kind to humans; it doesn’t cost!
  4. Circle does do some level of pre-processing, excluding YAML blocks if a when expression would evaluate to false. In other words, false conditional workflows don’t seem to count against the size limit. This seems to be true, but it comes from us doing experimental/observational science (potentially badly). SCIENCE!

Things we did to reduce our pipeline size:

  1. Use when statements - or some other form of pre-processing - to avoid submitting code to Circle you don’t need to run, if you can.
  2. An orb may bring a whole shell script into the pipeline. This is normally great as it’s an easy way to bring functionality into the pipeline without needing shell scripts to exist on the file system. HOWEVER, if you commonly use an orb command that’s bringing 50+ lines of shell into your pipeline every time… well there goes a chunk of your budget.
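To make that budget math concrete: because commands are inlined at every call site, a big orb command used across many jobs multiplies your size, rather than adding to it once. A back-of-the-envelope illustration (all the numbers here are made up):

```python
# Illustration only: compiled config size grows linearly with call sites,
# since commands are inlined rather than referenced in the 2.0 output.
AVG_LINE_BYTES = 60   # assumed average length of a line of shell
command_lines = 50    # a 50-line shell script inside an orb command
call_sites = 40       # how many jobs/steps invoke that command

one_copy = command_lines * AVG_LINE_BYTES
compiled_total = one_copy * call_sites

print(f"one copy: {one_copy} bytes; compiled total: {compiled_total} bytes")
print(f"that is {compiled_total / (3 * 1024 * 1024):.1%} of a 3MB budget")
```

With these assumed numbers, forty uses of one 50-line command already eat a few percent of the budget on their own; a handful of such commands adds up fast.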

Some things that we tried that probably didn’t end up helping:

  1. Refactored pre- and post-steps to be normal steps, where we could. This actually did nothing (because they get compiled to regular steps anyway).
  2. Maybe the test splitting and parallelism features in Circle 2.0 format would work to reduce size. Our problem would require a bit of refactoring to take advantage of this, but we may just try harder.
  3. While when statements drop out the parts of the pipeline that are false, filters statements don’t seem to: you still seem to pay that cost.

Very interesting takeaway for the folks who have been with Circle for a long time: it looks like Circle 2.1 syntax just “compiles down” to Circle 2.0. As a relatively new Circle user learning from some experienced hands, I’m not entirely sure what the experts can do with this information, but here you are.

Hope this helps. I do wish the error message from Circle would at least include the rendered size (something like “config too big: allowed 3MB, submitted 25MB”) so you know how much work you have ahead of you. For right now it’s a bit of a guessing game, based on the stuff that doesn’t fail, as to how much your larger pipelines are failing by. But config process helps some, assuming you’re holding it right and all that (which is slightly trickier to do than it would seem).

Good luck

This gives a command to “unwrap” the YAML and show the size.
Sadly, even though we wrote a custom job and only use it about 50 times, it gets “unwrapped” at each use and counts toward the size multiple times.

I’m getting this error as well, but my file is pretty slim:

> cat compiled.yml | wc -l
> du -sch compiled.yml
 68K    compiled.yml
 68K    total

We’re using dynamic configuration; maybe it has something to do with that.

There must be some sort of limit that triggers this. I opened a ticket to support but no reply yet. Any ideas what else might be affecting this limit?