Hello,
I noticed that when I rerun my job with SSH enabled, log into the machine, and run a simple ubuntu image, 90% of my disk space is already used up. My tests require at least 20GB of disk space in these images, so 50GB would be plenty, but somehow the disk is already almost full.
Do you have any idea why this is happening and how to fix it?
EDIT: Using `resource_class: large` somehow solved the issue, although it doesn't change the disk space I see when I ssh into the machine…
NB: When I ssh into the remote server I have access to a machine with 36 threads and 70GB of RAM, with tons of CircleCI jobs executing. Are those other customers' jobs? That's quite scary…
It sounds like you may be using Remote Docker. Remote Docker is a separate VM instance that is created and made available to the Docker executor via the `DOCKER_HOST` environment variable.
When you run `docker run -it ubuntu bash`, you are actually running it on the Remote Docker instance, not in the Docker executor container. That remote instance has 50GB of file storage available to it.
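For context, Remote Docker is enabled by the `setup_remote_docker` step in the job config — a minimal sketch (the job name and image are illustrative, not from your config):

```yaml
version: 2.1
jobs:
  build:
    docker:
      - image: cimg/base:stable    # primary container (the one SSH drops you into)
    steps:
      - checkout
      - setup_remote_docker        # provisions the separate Remote Docker VM
      - run: docker info           # docker commands here execute on that VM, not in the container
```

Any `docker build`/`docker run` in such a job consumes the remote VM's disk, which is why the space you see differs from the primary container's.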
Is your build running `docker build` or other commands that may be filling up `/var/lib/docker` during the build? You can run `docker system df` from within an SSH session to check whether that is the case.
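A quick way to check and reclaim space from that SSH session (these assume a reachable Docker daemon; note that `docker system prune` is destructive, so review its output before confirming):

```shell
# Summarize space used by images, containers, local volumes, and build cache
docker system df

# Per-item breakdown (which images/containers are the big ones)
docker system df -v

# Reclaim space: removes stopped containers, dangling images,
# unused networks, and the build cache (add --volumes to also drop unused volumes)
docker system prune
```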
Otherwise, you can actually SSH into the Remote Docker instance itself from a job re-run with SSH enabled: run `ssh remote-docker` while SSH'd into the primary container of the build.