To utilise a remote docker resource more efficiently across parallel jobs, I’m trying to get it to persist for the whole workflow.
I’m trying to achieve this by having a job that requests a remote docker instance and saves the related env vars/certificates into the workspace. In the next job, I grab those files from the workspace and source the vars into the current job.
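Roughly, what I mean is something like this (job names, image and paths are just placeholders; the `DOCKER_*` vars and cert directory are the ones `setup_remote_docker` provides):

```yaml
version: 2.1

jobs:
  start-remote-docker:
    docker:
      - image: cimg/base:stable
    steps:
      - setup_remote_docker
      - run:
          name: Save remote docker connection details
          command: |
            mkdir -p docker-env
            cp -r "$DOCKER_CERT_PATH" docker-env/certs
            env | grep '^DOCKER_' | sed 's/^/export /' > docker-env/docker.env
      - persist_to_workspace:
          root: .
          paths:
            - docker-env

  use-remote-docker:
    docker:
      - image: cimg/base:stable
    steps:
      - attach_workspace:
          at: .
      - run:
          name: Reconnect to the same remote docker
          command: |
            source docker-env/docker.env
            export DOCKER_CERT_PATH="$PWD/docker-env/certs"  # certs live here now
            docker ps  # this is where it falls over - the engine is already gone

workflows:
  build:
    jobs:
      - start-remote-docker
      - use-remote-docker:
          requires:
            - start-remote-docker
```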
However, this doesn’t work: by the time the next job runs, the remote docker instance has already been spun down. This sucks, because I’d love the next job to be able to use the images that have already been pulled, without pulling them again or waiting on caching etc. It would also be great to run parallel jobs (e.g. unit/integration tests) against the same remote docker instance, so they could all take advantage of the same shared services while maintaining parallelism, which would cut the startup time for those jobs.
By keeping a primary container running that requested a remote docker, exported the env vars and then simply slept while the other jobs started, I was able to achieve the desired result. But it means sacrificing a constantly running container, which is a bit annoying. Can anybody think of a better way of doing this?
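For reference, the keep-alive job looks something like this (the sleep length is arbitrary; the vars/certs get handed off via the workspace as above):

```yaml
  hold-remote-docker:
    docker:
      - image: cimg/base:stable
    steps:
      - setup_remote_docker
      - run:
          name: Hand off connection details
          command: |
            mkdir -p docker-env
            cp -r "$DOCKER_CERT_PATH" docker-env/certs
            env | grep '^DOCKER_' | sed 's/^/export /' > docker-env/docker.env
      - persist_to_workspace:
          root: .
          paths:
            - docker-env
      - run:
          name: Keep the job (and the remote engine) alive
          command: sleep 3600  # arbitrary; long enough for the rest of the workflow
```

The other jobs then attach the workspace and point their docker CLI at the held engine, as in the earlier snippet.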
I also tried keeping a throwaway container running on the remote docker, hoping the instance wouldn’t be shut down while it still had a running container, but no dice.
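i.e. just starting something long-lived on the remote engine before the job finished (image choice is arbitrary):

```sh
# start a long-lived throwaway container on the remote engine
docker run -d --name keepalive alpine sleep 86400
```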
In summary: is there any way to keep the remote docker instance online for the remainder of the workflow, or to detach it from the lifecycle of the job, without wasting a container just to keep it up? Could I, for example, SSH into it and have it do something to keep itself alive, or is the remote docker instance’s lifecycle intrinsically tied to that of the job that spawns it?
Thanks!