Why would you NOT use setup_remote_docker for docker executors

AFAICT, using setup_remote_docker gets you a larger execution environment for the same price:

  • with setup_remote_docker: 4 CPU, 15 GB RAM for 20 credits/min
  • without it: 4 CPU, 8 GB RAM for 20 credits/min

What am I missing? Why would you not use the setup_remote_docker modifier for all jobs (at the medium and large resource class level)?

I cannot say why it was decided to vary the RAM allocation for setup_remote_docker environments, but the main restriction that stops most people from using it as a general CI environment is the lack of control over the environment provided.

The images provided for ‘setup_remote_docker’ are focused solely on providing a basic Linux system that can run different releases of Docker. With the recent changes to the support list, that focus has narrowed to just the three most recent Docker versions.
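The Docker release used by the remote environment can be selected with the step’s ‘version’ key. A minimal sketch (the image name is illustrative, and the exact version strings accepted depend on the current support list):

```yaml
version: 2.1

jobs:
  build:
    docker:
      - image: cimg/base:stable  # illustrative primary image
    steps:
      - checkout
      - setup_remote_docker:
          # Pin a Docker release from the currently supported list;
          # version strings that have been dropped will fail the job.
          version: default
      - run: docker version
```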

In terms of setup_remote_docker vs a plain ‘docker’ environment, the issue comes down to where you want the ‘run’ commands executed. In a ‘docker’ environment, the ‘run’ commands are executed within the first container/image loaded. With setup_remote_docker the ‘run’ commands still execute in that container, but the docker CLI inside it is pointed at a separate remote Docker VM, so, much as with the ‘machine’ executor, the full functionality of the docker and docker compose commands becomes available.
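The contrast above can be sketched in config; a hedged example, with placeholder image and tag names (cimg/node, cimg/base, myorg/myapp are illustrative):

```yaml
version: 2.1

jobs:
  # Plain 'docker' executor: 'run' steps execute inside the primary
  # container. There is no Docker daemon reachable here, so a
  # 'docker build' step would fail in this job.
  test:
    docker:
      - image: cimg/node:lts
    steps:
      - checkout
      - run: npm test

  # With setup_remote_docker: 'run' steps still execute in the
  # primary container, but the docker CLI talks to a remote daemon,
  # so image builds and docker compose work in full.
  build-image:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - setup_remote_docker
      - run: docker build -t myorg/myapp:latest .
```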


The following old thread provides a little more info.

It does not answer your current question; instead, it shows that in the past there was an even larger difference between the offerings. The original meaning of ‘remote’ was just that: a controlling environment was created based on the resource class specified in config.yml, and an additional medium-sized Linux environment was then created where the docker commands were run. Only the first environment was charged for.
