What happens when reaching the workspace storage limit?

Hello.

I use persist_to_workspace and attach_workspace to pass the produced builds from two build jobs to an upload job, which publishes them as a GitHub release. I don’t actually need the files in the workspace after that (they’re on GitHub once the upload job finishes), but apparently it’s not possible to delete files from the workspace. It’s not even possible to overwrite files: my builds have the same name (like myproject-nightly-build-arm64.tar.xz), but the workspace just keeps growing anyway, it seems.
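For context, a minimal sketch of the kind of workflow described above (job names, image, paths, and the `ghr` upload step are all made up for illustration, not taken from my actual config):

```yaml
version: 2.1

jobs:
  build-arm64:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run: ./build.sh arm64   # hypothetical build script
      - persist_to_workspace:   # files persisted here cannot be deleted later
          root: dist
          paths:
            - myproject-nightly-build-arm64.tar.xz

  upload:
    docker:
      - image: cimg/base:stable
    steps:
      - attach_workspace:       # pulls everything the build jobs persisted
          at: dist
      - run: ghr nightly dist/  # e.g. upload the builds as a GitHub release

workflows:
  nightly:
    jobs:
      - build-arm64
      - upload:
          requires:
            - build-arm64
```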

Given that workspace files can’t be deleted, what happens when the workspace gets full? Will the jobs fail when copying files to it, so I’d have to wait until the end of the month, or will older workspace layers get deleted automatically to free up space?

You may need to raise this as a support issue.

Currently the docs regarding workspaces state:

"You can customize storage usage retention periods for workspaces on the CircleCI web app by navigating to Plan > Usage Controls."

BUT the docs covering persisting data state:

"Users on paid plans can customize storage usage retention periods for workspaces, caches, and artifacts on the CircleCI web app by navigating to Plan > Usage Controls."

So currently it may be that you only have access to any controls over the space used if you are on a paid plan.

That seems correct. If you’re on the free plan, there are no usage controls in the plan section.

Still, I’d like to know what happens when the limit is reached.

Again, that is not clear, as there may be no hard usage limit now that they bill for storage. When the free plan page states x GB of 2 GB, that is not your limit; it is your free allocation. Beyond 2 GB you end up being charged 420 credits per GB per month, so the limit effectively becomes the credits on your account.
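As a rough worked example of that billing model (the 2 GB free allocation and 420 credits/GB/month figures are taken from above; the 10 GB stored is an assumed number):

```shell
# Monthly credit cost for a given amount of stored data,
# assuming 2 GB free allocation and 420 credits per billable GB.
stored_gb=10
free_gb=2
rate=420
billable=$(( stored_gb > free_gb ? stored_gb - free_gb : 0 ))
echo "$(( billable * rate )) credits/month"   # 8 billable GB at 420 credits each
```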

So if you use the storage space while on the free plan, it would seem that you could very quickly use up all the available credits if you try to store large objects per build. It is also not possible to use your own external storage as an alternative, since egress traffic also incurs a per-GB charge.

Thank you! That makes sense. I guess the best thing to do is to always compress the data with something fast that still has a good compression ratio (zstd is probably the best choice) to cut down on space usage.

If you are on Linux, something like a tar file using zstd would make sense, and such an idea was put forward a while ago:

https://circleci.canny.io/cloud-feature-requests/p/add-zst-compression-as-optional-compression-method-to-persisttoworkspace
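Until something like that is built in, you can compress before persisting. A minimal sketch (the directory layout and file names are illustrative; `--zstd` needs a tar built with zstd support, e.g. GNU tar 1.31+):

```shell
# Create some example build output
mkdir -p dist/build
echo "binary contents" > dist/build/myproject

# Pack it with zstd before persist_to_workspace picks it up
tar --zstd -cf dist/myproject-nightly-build-arm64.tar.zst -C dist build

# List the archive contents to verify
tar --zstd -tf dist/myproject-nightly-build-arm64.tar.zst
```

You would then list only the `.tar.zst` under `paths:` in `persist_to_workspace`, so the uncompressed tree never hits workspace storage.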

If you have a large number of files that remain very stable between builds, you could also consider a repository at GitHub/Bitbucket, as a push would not generate much egress traffic. It will depend very much on your data set: everything going into one large tarball that has to be constantly uploaded and downloaded, versus a smaller set of updates being pushed to git with a full pull every time.

This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.