Cache paths are absolute, breaking cache restores between executors

Cache paths are treated as absolute, not relative as described in the docs. This means a cache saved in the “machine” executor cannot be restored in a “docker” executor.
This was reported here before by another user: Unable to restore cache saved in Docker to machine type job

config.yaml snippet:

  - save_cache:
      key: xyz-{{ .Branch }}-{{ .Revision }}
      paths:
        - .git

“Machine” executor saving cache:

Creating cache archive...
Uploading cache archive...
Stored Cache to xyz
  * /home/circleci/project/.git

Docker executor restoring cache:

Found a cache from build 615568 at [...]
Size: 5.3 GB
Cached paths:
  * /home/circleci/project/.git

Downloading cache archive...
Unarchiving cache...
tar: home/circleci: Cannot mkdir: Permission denied
tar: home/circleci/project/.git: Cannot mkdir: No such file or directory
tar: home/circleci: Cannot mkdir: Permission denied

The “Cached paths” listing is clearly wrong: the absolute path made it into the cache even though the config specifies a relative path.
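To illustrate the failure mode (paths here are made up for illustration): tar archives built from absolute paths record the full location of each member, so extraction tries to recreate that exact path wherever the archive is unpacked.

```shell
# Create a fake "home" with a .git directory and archive it by absolute path.
mkdir -p /tmp/cachedemo/home-a/project/.git

# GNU tar strips the leading "/" but keeps the rest of the path in the archive.
tar -cf /tmp/cachedemo/cache.tar -C / tmp/cachedemo/home-a/project/.git

# The member names still carry the original location, so unpacking on a
# machine with a different home directory tries to mkdir that old path.
tar -tf /tmp/cachedemo/cache.tar
# → members begin with tmp/cachedemo/home-a/..., not a relative .git
```

This matches the `tar: home/circleci: Cannot mkdir` errors above: the archive members are rooted at the saving executor's home directory, not at the restore location.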

I have run into the same issue, only in my case between Docker and macOS. For now I have resorted to duplicating the work in the first macOS job of my workflow, but that increases the time required to execute the workflows, possibly by quite a lot depending on how much work is redone.

Part of the issue here is that when you move things between executors, you run into file-permission problems because the users have different UIDs, even if the same usernames exist on both executors.

For this reason we recommend against doing this. I’ll make sure we add a note to our docs that it isn’t recommended.

Using different executors is actually the reason why I’m using caching:

  • There’s a command (C1) which I can easily run with executor E1
  • There’s another command (C2) which I can easily run with executor E2, but not E1
  • Command C2 needs what command C1 generates (e.g. C1 might generate an AWS credentials file)

Given that I want to avoid manually installing the different CLIs (e.g. kubectl, aws, terraform, etc.), I’d rather use the cache to pass data between executors.

Is there a better way to accomplish this?
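For what it’s worth, CircleCI workspaces (`persist_to_workspace` / `attach_workspace`) are the mechanism intended for passing files between jobs in a workflow, including jobs on different executors; the UID caveat above still applies to file ownership. A minimal sketch, with hypothetical image, job, and script names:

```yaml
version: 2
jobs:
  generate:                          # hypothetical job running C1
    docker:
      - image: circleci/python:3.7   # hypothetical image
    steps:
      - checkout
      - run: ./generate-credentials.sh
      - persist_to_workspace:        # save files for later jobs
          root: .
          paths:
            - credentials/
  consume:                           # hypothetical job running C2
    macos:
      xcode: "10.2.0"                # hypothetical version
    steps:
      - attach_workspace:            # restore the persisted files
          at: .
      - run: ./use-credentials.sh
workflows:
  version: 2
  build:
    jobs:
      - generate
      - consume:
          requires:
            - generate
```

Unlike caches, workspace paths are relative to the declared `root`, which sidesteps the absolute-path problem described above.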