Confused about what setup_remote_docker really does?


#1

I’m slightly confused about the way that setup_remote_docker interacts with the primary container listed in the config.yml.

Does the primary container type really matter very little in this setup, then?

For example, in https://circleci.com/docs/2.0/building-docker-images/, we are defining the primary container to be golang:1.6.4, then calling setup_remote_docker.

This seems to mean “Put golang:1.6.4 into the remote area” (I still don’t understand why this is even necessary… I guess it’s just some technical constraint of CircleCI that we don’t have context on?), then install Docker again within THAT Docker instance (which begs the question: is there already a Docker image configured with this, so we could skip that step and list it as our primary container?). Now run docker build using your own Dockerfile in your repo. Any commands like run: docker exec mybuildname bundle install then actually execute inside this newly built Docker container, within your primary Docker container. Finally, push that newly built Docker container.
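To make sure I’m reading it right, the flow from the doc example seems to be roughly this (a trimmed sketch; the image and container names are just placeholders I made up):

```yaml
version: 2
jobs:
  build:
    docker:
      - image: golang:1.6.4        # the primary container; every step runs here
    steps:
      - checkout
      # (the doc example also installs the Docker CLI into the primary
      # container around here, since golang:1.6.4 doesn't ship one)
      - setup_remote_docker        # allocate the separate Docker engine
      - run: docker build -t myorg/mybuildname .
      - run: docker run -d --name mybuildname myorg/mybuildname
      - run: docker exec mybuildname bundle install
      - run: docker push myorg/mybuildname
```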

From the outside, this all just seems over-complicated. It looks like there’s no benefit to using anything other than a plain ubuntu image as your primary container in this setup, assuming you want to run all your commands in the container you’re building with your Dockerfile rather than in the primary container.


#2

https://circleci.com/docs/2.0/building-docker-images/#example

1. All commands are executed in the primary container.
2. Once setup_remote_docker is called, a new remote environment is created, and your primary container is configured to use it.
3. All docker-related commands are also executed in your primary container, but building/pushing images and running containers happens in the remote Docker Engine.
4. We use project environment variables to store credentials for Docker Hub.

No. All the commands in your config run in the primary container, which in this case is based on golang:1.6.4.

When you run setup_remote_docker, we allocate a remote Docker engine. You are connecting to it via TCP. You can’t run Docker within Docker, as noted at the top of the documentation:

For security reasons, the Docker Executor doesn’t allow building Docker images within a job space.
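To make that concrete: after setup_remote_docker, the docker client in your primary container is pointed at the allocated engine over TCP (wired up via environment variables such as DOCKER_HOST), so a snippet like this (just a trimmed sketch) reports the remote daemon, not a local one:

```yaml
      - setup_remote_docker
      # the docker CLI still runs in the primary container, but it now
      # talks to the remote engine over TCP
      - run: docker version   # client: primary container, server: remote engine
      - run: docker info      # reports the remote engine, not a local daemon
```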

The docs are just trying to be thorough. Using the ubuntu image is a bad practice; just build your own Docker image with all your dependencies installed in it. There’s really no need to install any libs or packages in your config, outside of bundle install/pip install/composer install/etc.

It makes a world of difference. I have built an image for each 2.0 project I have touched.
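For example, instead of apt-getting tools on every run, point the job at a prebuilt image of your own (yourorg/build-image here is just a placeholder):

```yaml
version: 2
jobs:
  build:
    docker:
      # a custom image with the Docker CLI, runtime, and build tools baked in
      - image: yourorg/build-image:latest
    steps:
      - checkout
      - setup_remote_docker
      - run: bundle install                     # per-project deps only
      - run: docker build -t yourorg/yourapp .
```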


#3
  3. All docker-related commands are also executed in your primary container, but building/pushing images and running containers happens in the remote Docker Engine.

@rohara Not sure what you mean. In fact, Docker-related commands are executed on the remote engine (commands such as ps, info, run, exec)… I’m not sure about build/push.

If someone can clarify how build works, that would be super nice. For a build to succeed, the build should happen on the primary node, where the file-system context is, yet the build artifact (the image) ends up on the remote Docker engine… Either this is some kind of magic hack, or it’s just too late for me.

Waiting for your ideas, thank you!


#4

That’s from our docs. The command itself is executed in the primary container, but any actual Docker functionality happens in the remote environment.
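That’s also why docker build works even though the checkout lives in the primary container: the docker client tars up the build context and streams it over that TCP connection, and the remote engine does the actual build and keeps the resulting image. Roughly (image name made up):

```yaml
      - checkout               # source code lands in the primary container
      - setup_remote_docker
      - run: |
          # the client (here, in the primary container) packages the current
          # directory as the build context and sends it to the remote engine,
          # which builds the image and stores it in its own image cache
          docker build -t myorg/myapp .
```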

If you have artifacts in the remote environment, just docker cp them locally.
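Something along these lines (the image name and paths are made up):

```yaml
      - run: |
          # create (but don't start) a container from the image that lives on
          # the remote engine, then copy the file back to the primary container
          docker create --name extract myorg/myapp
          docker cp extract:/app/build/output.tar.gz /tmp/output.tar.gz
          docker rm extract
      - store_artifacts:
          path: /tmp/output.tar.gz
```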

You cannot build Docker images within Docker. You can either build in the remote environment or use the machine executor.
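For the machine route, the job looks something like this (no setup_remote_docker needed, since the Docker daemon runs on the job’s VM itself):

```yaml
version: 2
jobs:
  build:
    machine: true
    steps:
      - checkout
      - run: docker build -t myorg/myapp .   # the daemon runs on the VM itself
```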

