Docker in docker not mounting volumes


#1

I feel like this is a bug unless someone can tell me otherwise. A stripped-down version of my .circleci/config.yml is below:

version: 2.0

jobs:
  build:
    docker:
      - image: docker:17.03.2-ce
    working_directory: ~/repo
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Build app
          command: |
            docker run -v $PWD/repo:/app ubuntu ls -la

When the job executes it fails because the mounted volume is empty and contains none of the files from the repository. I tried to verify this by re-running the job with SSH, hopping into the build container, and running the same command. I found that changing the volume mount to “-v /:/app” and listing the files in /app shows the root filesystem as expected, but the repo directory under /root is missing:

ssh -p <port> <ip>
ls -la /root
      -rw-------    1 root     root           107 Jul  6 22:42 .ash_history
      drwxr-xr-x   34 root     root          4096 Jul  6 22:34 repo
    
> docker run -v $PWD/repo:/app ubuntu bash
>> cd /app && ls -la
     -rw-r--r--  1 root root 3182 Mar 27 17:00 .bashrc
     drwx------  4 root root 4096 Jul  6 22:28 .cache
     drwxr-xr-x  4 root root 4096 Jul  6 22:28 .config
     drwx------  3 root root 4096 Jul  6 22:28 .dbus
     drwx------  2 root root 4096 Mar 27 16:52 .gnupg
     drwxr-xr-x  3 root root 4096 Jul  6 22:28 .local
     -rw-r--r--  1 root root  140 Feb 20  2014 .profile
     -rw-------  1 root root 1024 Mar 27 16:53 .rnd
     drwx------  2 root root 4096 Jul  6 22:35 .ssh
>> exit

> docker run -v /:/app ubuntu bash
>> cd /app && ls
      bin  boot  data  dev  etc  home  initrd.img  lib  lib64  lost+found  media  mnt  nohup.out  opt  proc  root  run  sbin  srv  sys  tmp  usr  var  vmlinuz
>> ls -la ./root
      -rw-r--r--  1 root root 3182 Mar 27 17:00 .bashrc
      drwx------  4 root root 4096 Jul  6 22:28 .cache
      drwxr-xr-x  4 root root 4096 Jul  6 22:28 .config
      drwx------  3 root root 4096 Jul  6 22:28 .dbus
      drwx------  2 root root 4096 Mar 27 16:52 .gnupg
      drwxr-xr-x  3 root root 4096 Jul  6 22:28 .local
      -rw-r--r--  1 root root  140 Feb 20  2014 .profile
      -rw-------  1 root root 1024 Mar 27 16:53 .rnd
      drwx------  2 root root 4096 Jul  6 22:35 .ssh

#2

It’s not a bug.
Unfortunately, mounting volumes is not supported by the Docker executor:
https://circleci.com/docs/2.0/executor-types/#docker-benefits-and-limitations
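Since bind mounts silently come up empty with the Docker executor, one common workaround on the remote Docker host is to copy files into a throwaway data container with `docker create` and `docker cp`, then share its volume. A sketch only — the container and volume names (`helper`, `/app`) are made up for illustration:

```yaml
      - run:
          name: Copy repo into a volume on the remote Docker host
          command: |
            # Create a stopped container that owns a /app volume
            docker create -v /app --name helper alpine:3.4 /bin/true
            # Copy the checked-out repo from the build container into it
            docker cp . helper:/app
            # Any container using --volumes-from now sees the files
            docker run --volumes-from helper ubuntu ls -la /app
```

This works because `docker cp` streams the files through the Docker API rather than relying on a shared filesystem between the two hosts.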


#3

I think I just got bitten by this. Volumes seemed to have been working fine for me until the newest one I added broke my build. I’ve only just realised that all my previous volumes only appeared fine because they were empty (they never actually mounted), whereas the new one needs to work: it’s for Mongo logs, and if Mongo cannot see the log file it refuses to start.

Out of interest, does anyone know what technical limitation causes this? Is there something about Docker-in-Docker that has this effect?


#4

Ah, to answer my own question, I found the answer from elsewhere:

Volume mounts will not work on the docker executor because the Docker host running your container is different from the Docker host that you’re controlling with docker-compose. Remote volume mounting isn’t possible with our setup, but you can get volume mounting with the machine executor.
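To illustrate the machine-executor route mentioned above: the machine executor runs the steps on a full VM whose Docker daemon is local, so ordinary bind mounts behave as expected. A minimal sketch, assuming the same repo layout as post #1:

```yaml
version: 2
jobs:
  build:
    machine: true
    working_directory: ~/repo
    steps:
      - checkout
      - run:
          name: Build app
          command: |
            # Bind mount works here: the daemon and the checkout
            # are on the same host
            docker run -v $PWD:/app ubuntu ls -la /app
```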

