I’ve been trying to make our tests run faster on CircleCI, and I’ve noticed that some of our builds have a very slow “restoring source cache” step. Usually it takes ~30 seconds, but sometimes it takes up to 2 minutes. Is there anything we can do to reduce the time this step takes? For example, should I try hacking the CircleCI checkout into a shallow clone? That would probably shrink the size of the checkout on disk significantly.
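Concretely, I was imagining replacing the default checkout with something like this. It’s just a rough sketch I haven’t tried on Circle yet; the repo URL is a placeholder and I don’t know how it would interact with the source cache:

```
# Hypothetical replacement for the stock checkout: clone only the
# tip of the branch, no history. Assumes the working directory is empty.
git clone --depth 1 --branch "$CIRCLE_BRANCH" \
  git@github.com:our-org/our-repo.git .
```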
If there isn’t anything we can do to make this faster in all cases, is there something that we are doing that is causing the unusually long “restoring source cache” step?
One related conversation I found is the following, where it sounds like temporarily adding some “git repo hacking” steps to our build might help:
Unless I am mistaken, Circle already uses shallow clones.
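If you want to verify that on your own builds, you can SSH into one and check whether the checkout is actually shallow. This is just a quick way to look, nothing official:

```
# A shallow clone records its cut-off commits in .git/shallow, so the
# presence of that file (or this rev-parse flag) tells you what you got.
test -f .git/shallow && echo "shallow" || echo "full clone"
git rev-parse --is-shallow-repository   # prints true/false on newer git
```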
I would also love to know how we can speed up the cache restore, since I’ve already optimized the low-hanging fruit, and apart from ditching integration tests or something, I can’t go much further.
I’m in the same boat; this number swings wildly from 30s to 2m for our project too. I’ve been working on optimizing test performance, and this is one thing that’s frustratingly out of my control.
I would also like visibility into this, or suggestions for how to improve the consistency and performance of restores. I’ve seen some very long (5 minute) restore times.
I’m just glad to hear that I’m not the only one who has run into this.
I’ve been tempted to try shrinking the git repository on disk to make this go faster. For us, the .git directory is about 500 MB, and I’ve seen restores take multiple minutes. Do the people who are hitting slow restores also have big repositories?
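The kind of housekeeping I had in mind is roughly this. I haven’t tested it inside a Circle build, and I have no idea yet whether a smaller .git actually translates into a faster restore:

```
# See where the space is going (pack files vs. loose objects).
git count-objects -vH

# Repack aggressively and drop unreachable objects to shrink .git.
git gc --aggressive --prune=now
```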
I am also trying to improve the speed of the source cache step. My times are around 3–4 minutes. It is a large repo; even so, restoring the cache seems to be slower than pulling directly from GitHub.
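If anyone wants to make the same comparison on their own project, this is roughly what I mean; the URL is a placeholder for your repo, and the timing will obviously vary:

```
# Time a fresh clone from GitHub as a baseline to compare against
# the "restoring source cache" step shown in the build output.
time git clone git@github.com:our-org/our-repo.git /tmp/fresh-clone
```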
In general, slow cache restores come down to the connection to S3. We do everything we can to mitigate the problem, but ultimately all of our fates are up to AWS.
The posts in this thread span different times and platforms, so I am closing it. We address each incident individually when we receive support tickets.