I see two ways in the docker executor to run an image:
- docker: ... image:
- setup_remote_docker
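For reference, the two forms look roughly like this in a 2.0-style config (the job name and the primary image are just placeholders):

```yaml
version: 2
jobs:
  build:
    docker:
      - image: circleci/python:3.6   # primary container where steps run
      - image: registry:2.6          # secondary/service container
    steps:
      - checkout
      - setup_remote_docker          # provisions a separate, remote Docker engine
      - run: docker version          # the docker CLI here talks to that remote engine
```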
I need to run a fairly vanilla image (registry:2.6 FWIW) but load custom files onto it, and I don't see any way to do that with either approach:
- docker: ... image: offers no way to mount volumes or otherwise copy files into the container.
- setup_remote_docker lets me docker run the image and then docker cp my files in, but then I cannot reach the exposed service on port 5000. Even docker run -p 5000:5000 doesn't really help, because the container runs on a remote host; my build environment can reach nothing but the remote Docker daemon itself, via DOCKER_HOST.
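Roughly the sequence I am attempting against that remote engine (the config file name and path are only illustrative):

```shell
docker create --name registry -p 5000:5000 registry:2.6
docker cp ./my-registry-config.yml registry:/etc/docker/registry/config.yml
docker start registry

# Port 5000 is published on the *remote* Docker host, not in my build
# environment, so this still fails:
curl http://localhost:5000/v2/_catalog
```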
Is there any sane way to both execute a custom docker image / add files to a standard one and access the services the running image provides?
Yup! Use Docker Compose, and bring up the thing that connects to your target container in another container.
The problem with publishing ports is a security one - if you had permission to do that, you would effectively have control of the host's networking stack, giving you admin-level access to a system that runs other customers' builds. That is obviously not allowable.
However, if you bring up a series of Docker containers and wire them together on a virtual network - which is exactly what Compose does automatically - then you do not need to publish any port to the host. One container simply connects to another, with no restrictions inside that network.
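As a minimal sketch (the service names, test image, and paths are placeholders; ./registry is assumed to contain a Dockerfile that starts FROM registry:2.6 and COPYs your custom files in):

```yaml
# docker-compose.yml
version: "3"
services:
  registry:
    build: ./registry            # derived image with your custom files baked in
  tester:
    image: curlimages/curl:latest
    depends_on:
      - registry
    # Both services share the default Compose network, so the registry is
    # reachable by its service name; nothing is published to the host.
    command: ["curl", "-sf", "http://registry:5000/v2/_catalog"]
```

In your job, after setup_remote_docker, something like docker-compose up --build --exit-code-from tester will build and run both containers on the remote engine and exit with the tester's status.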
I was afraid of that. It basically becomes "just like the machine executor, but running in a container," and you lose most of the benefits of the docker executor. It is so very close: each part - the docker executor and setup_remote_docker - does half and misses half, but the two halves cannot meet in the middle. No choice, eh?
In my experience, it has not caused me any trouble at all, in several different use cases. I can foresee a theoretical loss of benefits - I think the remote executors have their own RAM allocation, for example - but have you looked at whether your immediate use case would actually be negatively impacted?