Disk space limited in Docker image

Hello,
I noticed that when I rerun my job with SSH enabled, log into the machine, and run a simple Ubuntu image, 90% of my disk space is already used up. My tests require at least 20GB of disk space in these images, so 50GB would be plenty, but somehow the disk is already almost full.
Do you have any idea why this is happening and how to fix it?

$ docker run -it ubuntu bash
root@host:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
none             50G   42G  4.9G  90% /
tmpfs            64M     0   64M   0% /dev
tmpfs           3.7G     0  3.7G   0% /sys/fs/cgroup
/dev/sdb         50G   42G  4.9G  90% /etc/hosts
shm              64M     0   64M   0% /dev/shm
tmpfs           3.7G     0  3.7G   0% /sys/firmware

EDIT: Somehow using resource_class: large solved the issue, although that doesn’t change the disk space I see when I SSH into the machine…

NB: When I SSH into the remote server, I have access to a machine with 36 threads and 70GB of RAM, with tons of CircleCI jobs executing. Are those the jobs of other customers? That’s quite scary…

It sounds like you may be making use of Remote Docker. Remote Docker is a separate VM instance that is created and made available to the Docker Executor via the DOCKER_HOST environment variable.

When you run docker run -it ubuntu bash, you are actually running it on the Remote Docker instance, not in the Docker Executor container. That remote instance has 50GB of file storage available to it.
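
If you want to confirm this from inside the job, you can check the environment that setup_remote_docker configures. A minimal check, assuming a standard Docker Executor job (the exact values will differ):

# Inside the primary container, after setup_remote_docker has run:
echo "$DOCKER_HOST"                 # should point at the Remote Docker VM, not a local socket
docker info --format '{{.Name}}'    # hostname of the daemon actually serving your docker commands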

I did a quick test with the following config:

version: 2.1

jobs:
  build:
    parameters:
      resource:
        type: string
    resource_class: << parameters.resource >>
    docker:
      - image: cimg/base:stable
    steps:
      - setup_remote_docker
      - run: docker run --rm ubuntu df -h

workflows:
  workflow:
    jobs:
      - build:
          matrix:
            parameters:
              resource: ["medium", "large"]

Both the medium and the large resource classes produced similar output:

#!/bin/bash -eo pipefail
docker run --rm ubuntu df -h
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu

3a23efb7: Pulling fs layer 
fc6f11f0: Pulling fs layer 
Digest: sha256:703218c0465075f4425e58fac086e09e1de5c340b12976ab9eb8ad26615c3715
Status: Downloaded newer image for ubuntu:latest
Filesystem      Size  Used Avail Use% Mounted on
none             97G  2.8G   95G   3% /
tmpfs            64M     0   64M   0% /dev
tmpfs           3.7G     0  3.7G   0% /sys/fs/cgroup
/dev/sda1        97G  2.8G   95G   3% /etc/hosts
shm              64M     0   64M   0% /dev/shm
tmpfs           3.7G     0  3.7G   0% /sys/firmware
CircleCI received exit code 0

I then SSH’d into the medium resource class job, and the output was consistent with the initial runs:

$ docker run --rm ubuntu df -h
Filesystem      Size  Used Avail Use% Mounted on
none             97G  2.8G   95G   3% /
tmpfs            64M     0   64M   0% /dev
tmpfs           3.7G     0  3.7G   0% /sys/fs/cgroup
/dev/sda1        97G  2.8G   95G   3% /etc/hosts
shm              64M     0   64M   0% /dev/shm
tmpfs           3.7G     0  3.7G   0% /sys/firmware

Is your job running docker build or other actions that may be filling up /var/lib/docker during the build? You can run docker system df from within an SSH session to check whether that is the case.
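
For example, something along these lines will show where the space is going and reclaim what Docker considers unused (a sketch only; prune deletes stopped containers, dangling images, unused networks, and the build cache, so make sure your job does not depend on them):

# From an SSH session into the job:
docker system df          # summary of space used by images, containers, local volumes, build cache
docker system df -v       # verbose per-image and per-container breakdown
docker system prune -af   # reclaim space; -a also removes unused (not just dangling) images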

Otherwise, you can actually SSH into the Remote Docker instance itself from a rerun-with-SSH session: run ssh remote-docker while SSH’d into the primary container of the build.
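
Once there, you can inspect the remote instance’s disk directly. A rough sketch (the paths are Docker’s defaults and may differ on the instance):

# From the primary container of a rerun-with-SSH session:
ssh remote-docker
# Now on the Remote Docker VM:
df -h /                       # overall disk usage of the instance
sudo du -sh /var/lib/docker   # space consumed by images, layers, containers, and volumes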

