Copy files from the remote Docker created by setup_remote_docker to the CircleCI primary container


I’m using setup_remote_docker with CircleCI 2.1 in a job running on a Docker executor.

    working_directory: ~/repo/api
    docker:
      - image: circleci/elixir:1.6
    steps:
      - setup_remote_docker

      - run:
          name: Build application Docker image
          command: |
            docker build -t my-tag --build-arg APP_ENV=$(APP_ENV) .

      # copy build artefacts from the remote docker to the circle ci primary container in order to cache them
      - run: docker -D -H $DOCKER_HOST --tls --tlscert $DOCKER_CERT_PATH/cert.pem cp $DOCKER_MACHINE_NAME:/app/_build/. ./_build/

The Dockerfile used by the docker build command is this one:

FROM elixir:1.6.6

# install dependencies
RUN mix local.hex --force && \
    mix local.rebar --force

# build in /app, matching the paths referenced below
WORKDIR /app

ADD . .

RUN mix deps.get
RUN mix release --env=$MIX_ENV

SHELL ["/bin/bash", "-l"]

CMD ["/bin/bash", "-l", "/app/_build/prod/rel/creative_platform/bin/creative_platform", "foreground"]

The docker cp command at the end of the example does not work: the container name (taken from DOCKER_MACHINE_NAME) does not seem to exist.
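For context, and as an aside not from the thread itself: docker cp takes a container name or ID, not the VM name stored in DOCKER_MACHINE_NAME, and the Docker CLI streams copied files back to wherever the CLI runs. So a step like the following sketch could copy artefacts straight into the primary container. It assumes the image was tagged my-tag as in the build step above; the container name artefacts is made up.

```yaml
      # Sketch: create a stopped container from the built image, then copy
      # out of it; docker cp writes to the filesystem the CLI runs on.
      - run:
          name: Copy build artefacts from the built image
          command: |
            docker create --name artefacts my-tag
            docker cp artefacts:/app/_build ./_build
            docker rm artefacts
```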

I think it should be possible to access this remote Docker engine, since the setup_remote_docker step produces output containing the information needed to access it, like this:

Allocating a remote Docker Engine
Waiting for a Docker Engine assignment: ....................
Assigned Docker Engine request id: 50483050
Remote Docker engine created. Using VM 'default-bfd82105-046c-4732-a529-fe90b3e9df94'
Created container accessible with:

My question is: how can I access this remote Docker engine in order to copy the build artefacts to the CircleCI primary container so they can be cached?


The remote Docker host can be SSH’d into, so you may be able to use scp.

This support center article explains how to connect to the remote Docker engine from inside the job.
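For anyone skimming: setup_remote_docker makes the remote host reachable inside the job via the SSH alias remote-docker (the alias is used later in this thread), so a quick connectivity check could look like this sketch:

```yaml
      - run:
          name: Check access to the remote engine
          command: ssh remote-docker docker info
```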

Are you trying to cache your built image layers? CircleCI offers this as a paid feature (Docker Layer Caching), but there are other creative ways to export the image cache:

In my experience it takes more time to export the layers, cache them, and restore them than it does to just let the layers rebuild every time. YMMV: I am using an alpine-based image, which drastically improved my build times.
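One such approach, sketched here as a guess rather than the exact mechanism from the blog post or orb (the cache key names and tarball path are made up), is to docker save the built image to a tarball, cache it with CircleCI's cache steps, and docker load plus --cache-from it on later runs:

```yaml
      - restore_cache:
          keys:
            - image-cache-v1-
      - run:
          name: Load previously cached image layers, if any
          command: |
            if [ -f /tmp/image.tar ]; then docker load -i /tmp/image.tar; fi
      - run:
          name: Build, reusing cached layers, then export
          command: |
            docker build --cache-from my-tag -t my-tag .
            docker save -o /tmp/image.tar my-tag
      - save_cache:
          key: image-cache-v1-{{ checksum "Dockerfile" }}
          paths:
            - /tmp/image.tar
```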


Hi @ajhodges, thanks for sharing that. I love seeing creative ways for folks to do things like this. Some people like to pay money for making things easy, but I’m a fan of figuring out things myself as well. :slight_smile:

For sure! I can appreciate why DLC is a paid feature. This workaround will probably only be useful to those with long image build times and relatively small images.

I also forgot to mention I dissected the approach in that blog post and published it as an orb - available here:

But please don’t contact me for support because I only did enough to get it working well enough to know that it wasn’t worth spending any more time on for my use case :slight_smile:


Thanks @drazisil, I SSH’d into the CircleCI job and then into the remote Docker host, but it looks empty (there is only a build folder, and none of the build artefacts or application folders related to my app).
Even when I restart the build command while SSH’d into the main CircleCI VM (docker build -t my-tag --build-arg APP_ENV=$(APP_ENV) .), I don’t see anything on the remote Docker host.

Same thing when I try to scp the _build folder produced on the remote Docker host to the base CircleCI container (scp remote-docker:/app/_build/ .): I get the error scp: /app/_build: No such file or directory.

It looks like the remote Docker host I connect to is not the right one, or maybe I am not able to see its content with ssh/scp?

I was trying to create a volume from a folder, but it’s empty as well.

I successfully used scp to copy over my generated code coverage files for storage as artifacts. If anyone else is looking to do that, it looks something like this (assuming your working directory is called project):

scp -r remote-docker:~/project/coverage ~/project/coverage
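In config form this could pair with store_artifacts to expose the files in the job UI; a sketch using the same paths as above:

```yaml
      - run: scp -r remote-docker:~/project/coverage ~/project/coverage
      - store_artifacts:
          path: ~/project/coverage
```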

For anyone coming along later: ssh remote-docker takes you onto the host running Docker, not into the Docker container itself.

Once on the remote host you need to run the appropriate docker commands, e.g.
docker run --rm -d --name container -it <image>:<tag> tail -f /etc/hosts
docker cp container:/path/to/content ~/content
scp remote-docker:content ~/project/content

You can probably chain it together with

  - run:
      name: copy from container
      command: |
        ssh remote-docker -C 'docker run --rm -d --name container -it <image>:<tag> tail -f /etc/hosts'
        ssh remote-docker -C 'docker cp container:/path/to/content ~/content'
        scp remote-docker:content ~/project/content
        ssh remote-docker -C 'rm -rf content'
        ssh remote-docker -C 'docker stop container'

The steps I was performing were centred around docker build. YMMV

I’m waiting for AWS Role Assumption (Cloud Feature Requests, CircleCI Ideas) and was looking at this as a way of working around the assume-role limitation. I think I can use a multistage build instead.
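For anyone curious what that would look like, here is a multistage sketch of the earlier Dockerfile; the stage name, the runtime base image, and the assumption that the release bundles its own ERTS are all mine, not from the thread:

```dockerfile
# build stage: compile the release
FROM elixir:1.6.6 AS build
WORKDIR /app
RUN mix local.hex --force && \
    mix local.rebar --force
ADD . .
RUN mix deps.get
RUN mix release --env=$MIX_ENV

# runtime stage: copy only the built release out of the build stage
FROM debian:stretch-slim
COPY --from=build /app/_build /app/_build
CMD ["/app/_build/prod/rel/creative_platform/bin/creative_platform", "foreground"]
```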