No space left on device while creating mongo


This happens while creating a mongo container. Just started happening this AM.


Just FYI, this is not happening every time, but intermittently.


We’re looking into it. Thanks for reporting it.


Here’s another data point on “no space left on device” –


This has been resolved now and shouldn’t happen again.


This just happened to me on Circle 2.0, although I was doing something different from the original poster.
Admins, please see (private repo)
If you scroll to the bottom of the failed section, you’ll see:

ERROR: Failed to create usr/lib/libstdc++.a: No space left on device
ERROR: g++-5.3.0-r0: No space left on device


That’s the machine executor – can you retry with SSH and run df -h to track the available space? You can watch it in a loop:

while true; do df -h; sleep 5; done


Happened on the Docker executor during a docker build.

ERROR: Failed to create usr/lib/jvm/java-1.8-openjdk/jre/lib/resources.jar: No space left on device
ERROR: openjdk8-jre-lib-8.121.13-r0: No space left on device

It looks like a rebuild fixed it.


Same issue with ‘docker build’ started yesterday; at first only some builds were failing, but it’s looking more consistent today.

Error response from daemon: Error processing tar file(exit status 1): write /vendor/cache/validates_overlap-0.8.2.gem: no space left on device
Exited with code 1

Rebuild doesn’t seem to help.

SSHed into the host, and used space looks pretty normal.


Running docker system prune -f reclaimed ~9GB. I’ve now run this on two hosts and I’m getting green builds.


The issue is still occurring, so we’re adding this to our config.yml:

  • run:
      name: Prune Docker cache (temporary)
      command: docker system prune -f
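If you’d rather not prune on every build, one option is to prune only when the disk is actually getting full. A minimal sketch (our own, not an official CircleCI recipe; the 90% threshold is an arbitrary assumption):

```shell
#!/bin/sh
# Prune the Docker cache only when the root filesystem is nearly full.
# THRESHOLD is an assumed value -- tune it for your hosts.
THRESHOLD=90

# Parse the Use% column for / out of POSIX `df -P` output ("42%" -> 42).
disk_use_pct() {
  df -P / | awk 'NR==2 { sub("%", "", $5); print $5 }'
}

# The `command -v docker` guard just skips the prune on hosts without docker.
if command -v docker >/dev/null 2>&1 && [ "$(disk_use_pct)" -ge "$THRESHOLD" ]; then
  docker system prune -f
fi
```

Drop the body of this script into the `command:` of a `run:` step (use YAML’s `|` block style for multi-line commands).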


This is happening for us all the time now as well.


Same issue here at OSS
It started appearing from time to time about a week ago, and today it was reproducible on almost every build.

We’re using the remote Docker executor with reusable: true docker layer caching and docker-compose build; the rabbitmq, mongo, and postgresql containers all fail to start because no disk space is available.

Filesystem in Docker:

circleci@84c09db3a452:~$ docker run --rm alpine:3.4 df -h
Filesystem                Size      Used Available Use% Mounted on
none                     98.4G     96.8G         0 100% /
tmpfs                     3.7G         0      3.7G   0% /dev
tmpfs                     3.7G         0      3.7G   0% /sys/fs/cgroup
/dev/sda1                98.4G     96.8G         0 100% /etc/resolv.conf
/dev/sda1                98.4G     96.8G         0 100% /etc/hostname
/dev/sda1                98.4G     96.8G         0 100% /etc/hosts
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                     3.7G         0      3.7G   0% /proc/kcore
tmpfs                     3.7G         0      3.7G   0% /proc/timer_list
tmpfs                     3.7G         0      3.7G   0% /proc/timer_stats
tmpfs                     3.7G         0      3.7G   0% /proc/sched_debug
tmpfs                     3.7G         0      3.7G   0% /sys/firmware


This is a different issue from the original post now… but can you try including this in your config?

docker images --no-trunc --format '{{.ID}} {{.CreatedSince}}' \
    | grep ' months' | awk '{ print $1 }' \
    | xargs --no-run-if-empty docker rmi
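Before wiring that pipeline to docker rmi, you can dry-run the grep/awk stage against canned output to confirm it only selects months-old images (the sha256 IDs below are made up):

```shell
#!/bin/sh
# Hypothetical sample of `docker images --no-trunc --format '{{.ID}} {{.CreatedSince}}'`.
sample='sha256:aaa 2 months ago
sha256:bbb 3 days ago
sha256:ccc 14 months ago'

# Keep only the IDs of images created months ago -- the removal candidates.
printf '%s\n' "$sample" | grep ' months' | awk '{ print $1 }'
# Prints sha256:aaa and sha256:ccc; the days-old image is left alone.
```

Note the filter matches the literal word “months”, so images under two months old (reported in days or weeks) are never touched.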


Thanks @rohara !
I got the point.
So docker system prune removed 94G of data; 99% of it was volumes, which persisted because we use the remote Docker executor with reusable: true. Adding more cleanup to our CircleCI build steps now:

docker volume prune --force
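For reference, a combined cleanup step along these lines could go in config.yml (the step name is ours; both prune commands only remove unused volumes and dangling images, so they’re safe to run between builds):

```yaml
  - run:
      name: Clean up Docker volumes and dangling images (temporary)
      command: |
        docker volume prune --force
        docker image prune --force
```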