Saving cache stopped working ("Warning: skipping this step: disabled in configuration")


#1

Saving cache was working, but today I believe without changing anything we keep getting:

Warning: skipping this step: disabled in configuration

See e.g.

https://circleci.com/gh/cBioPortal/cbioportal-frontend/13491?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link

Any clue?

Thanks!


#2

For us it’s broken for external pull requests, while still working for our own commits.


#3

This is related to a recent security change which prevents forked PRs from saving to the cache, though they can still restore from it.


#4

https://github.com/mui-org/material-ui is impacted by this issue too. Same observation as @bahmutov. It’s OK for private repositories but completely broken for open-source contributors.

How should we work around the problem? I think that this regression started ~48h ago. We are stuck.


#5

I’m running into the same thing on https://github.com/badges/shields. I appreciate the security improvement, though I also need to find a workaround.

Here’s what I’ve come up with so far, as a feature suggestion: what about allowing workspace-specific caches? If the cache key included a workspace ID placeholder, the cache could be saved safely, even on forks.
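To make the suggestion concrete, here is a sketch of what such a config might look like. Note that the `{{ .WorkspaceID }}` placeholder is hypothetical — it does not exist in CircleCI today; the `checksum` template and the `save_cache`/`restore_cache` steps are real:

```yaml
# Hypothetical sketch: {{ .WorkspaceID }} is the proposed placeholder,
# scoping the cache to one workflow's workspace so forks can't poison it.
- restore_cache:
    keys:
      - v1-deps-{{ .WorkspaceID }}-{{ checksum "package.json" }}
- run: npm install
- save_cache:
    key: v1-deps-{{ .WorkspaceID }}-{{ checksum "package.json" }}
    paths:
      - node_modules
```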


#6

Can you just comment out all loading and saving of caches? I imagine your builds will run slowly without it, but they will still work.
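For anyone taking this route, a minimal sketch of what the commented-out steps might look like (assuming an npm project; the key name is illustrative):

```yaml
steps:
  - checkout
  # Caching disabled until forked-PR cache saving is resolved:
  # - restore_cache:
  #     keys:
  #       - v1-deps-{{ checksum "package.json" }}
  - run: npm install
  # - save_cache:
  #     key: v1-deps-{{ checksum "package.json" }}
  #     paths:
  #       - node_modules
```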


#7

It’s an option. Thank you for suggesting it. I’m implementing it: https://github.com/mui-org/material-ui/pull/12573, but this makes me sad. CircleCI repositories are impacted too: https://github.com/circleci/circleci-docs/blob/master/.circleci/config.yml.


#8

This is affecting three of our open source projects, suddenly causing every build to fail, because we use caches to preserve pipenv virtualenvs between jobs in a workflow.

Is there a recommended work-around? What is the canonical way to cache dependencies in this manner with the current public API?

Thanks for all your hard work!


#9

This does sound like a bug with caches, but if you want to preserve information between jobs in a workflow, I think you should be using workspaces anyway. Could it be worth switching to that to get you rolling again?
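For passing files between jobs in one workflow, workspaces do work on forks. A trimmed sketch (job names, image, and paths are illustrative; it assumes `PIPENV_VENV_IN_PROJECT=1` so the virtualenv lives in `./.venv`):

```yaml
jobs:
  build:
    docker:
      - image: circleci/python:3.6
    environment:
      PIPENV_VENV_IN_PROJECT: "1"  # keep the virtualenv in ./.venv
    steps:
      - checkout
      - run: pipenv install
      - persist_to_workspace:  # hand the virtualenv to downstream jobs
          root: .
          paths:
            - .venv
  test:
    docker:
      - image: circleci/python:3.6
    steps:
      - checkout
      - attach_workspace:  # pick up what build persisted
          at: .
      - run: pipenv run pytest
```

Unlike caches, a workspace is scoped to a single workflow run, which is presumably why it isn’t affected by the forked-PR restriction.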


#10

Nowhere does it officially say this is a security issue. What is the security issue here?

It also says it’s “disabled in configuration” — is there a way to enable it? I have over 20 projects that are used internally, and in our org’s workflow the developers fork the projects (private repos).

Is there a way to pin a CircleCI version? I’m on the verge of leaving CircleCI — it’s completely not OK to not even send a warning email. If not for this post, I would never have found the issue.


#11

While I understand why this may be a security issue, since the cache data may persist across different executions of the workflow, there should be a way to whitelist this feature — either for everyone, for members of a GitHub organization, or for a specific set of members/groups (as provided by some Jenkins plugins).

Working with workspaces can solve the issue, but the performance hit (which is what made us prefer CircleCI over its competitors in the first place) is non-negligible.


#12

“Warning: skipping this step: disabled in configuration”… but I don’t see a configuration to enable it. Any advice, other than having to switch to using workspaces to save node_modules?


#13

Hi @Cormac - can you shed any light on this change? I don’t need it, but there’s a few people in the queue above!


#14

Supporting workspace-specific caches, where a workspace ID placeholder is added to the cache key, would be another way to support this while maintaining good security.


#15

Is there any workaround for this? Is there a configuration option to re-enable cache saving yet?


#16

At minimum, this “security fix” should be in the documentation so it isn’t a complete mystery until one lands on this thread - https://circleci.com/docs/2.0/concepts/#cache . Additionally, why not make the error message clear? “Disabled in configuration” is actively misleading; something like “Build started from a fork branch, skipping step” would be far better.

I understand the security implications for public repos, but this has broken our private repo. It is a reasonable assumption that the admins of a private repo have only given fork access to trusted parties.

I echo AmiM’s frustration with the fact that this change is one that would obviously break many customers’ builds and was not properly communicated or documented (much less workarounds and best practices provided to affected customers).


#17

What’s really frustrating us is that it’s disabled for our private repo. We’re already using github to whitelist who can make PRs against our repo. This additional step is slowing down our team’s builds significantly. Is there any hidden option we can add to our config.yml file to disable this security check?


#18

We ended up just checking out the code for each job in our workflow. It takes longer, but is a quick fix. I’ve discovered the Workspaces API, but I am having a hard time understanding whether it’s an ideal fit for caching dependencies. Does the workspace payload have to be sent to S3, then back, between jobs?


#19

Dear Employees,

We NEED an option to enable this through the admin settings! You may even choose to have it disabled by default but this is a MUST - you can’t disable cache on forks completely. Why would you do that? What are the security concerns? How did this even make it into PROD without a switch in the admin panel?!!!


#20

To avoid further unnecessary pile-on in the comments: please remember that CircleCI employees are people first, employees second. As a fellow user of CircleCI, please allow me to make the request to everyone in this thread to address everyone with respect.

Of course, as an engineer, I’ve had builds stop working that were not my fault, and I agree that such events can be immensely frustrating. However, as an engineer, I’ve also received hostile and angry communications from frustrated customers, and I recall very clearly that those messages did not help me fix the problem any faster.