I’m wondering if anyone has managed to attach their CircleCI Docker client to a swarm and deploy to it?
I’ve gotten as far as building and pushing my image, and I can connect to a swarm locally, but I can’t seem to do the auth in CircleCI. If anyone has managed it, I’d love to know how.
I’ve got this working but it feels so hacky.
I read your post on the Docker forum as well, and I agree that option number one is ideal, so that’s what I worked towards.
docker run --rm -ti -v /var/run/docker.sock:/var/run/docker.sock -e DOCKER_HOST dockercloud/client help
That shows a lot of information; notably, you can use -u and -p to specify the username and password, which removes the need for the command to be interactive.
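Something along these lines should work (the swarm name and the credential environment variables here are placeholders for whatever you use in your project settings):
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -e DOCKER_HOST dockercloud/client -u "$DOCKER_USER" -p "$DOCKER_PASS" myorg/myswarm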
The next problem was that by running the export command that the client outputs, you no longer have access to a Docker host. The docs state:
For security reasons, the Docker Executor doesn’t allow building Docker images within a job space.
So the issue is that the container started by the dockercloud client isn’t on the job host; it’s already on a remote host. Changing DOCKER_HOST in the job space therefore removes access to the remote host.
Instead, what I did was start another container on the remote host to run the swarm commands, like so:
docker run --rm -e DOCKER_HOST=172.17.0.1:32768 -v /var/run/docker.sock:/var/run/docker.sock docker:17.03.1-ce docker service list
Note that the IP address is not the one output by the dockercloud client command; it is the internal IP (inside the Docker bridge network) assigned to the dockercloud client-proxy container. Also note that I pinned the Docker image version to match the version running in the swarm.
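From there, deploying is just another swarm command run the same way. For example, something like this should update a service to the image pushed earlier in the build (the service and image names are placeholders):
docker run --rm -e DOCKER_HOST=172.17.0.1:32768 -v /var/run/docker.sock:/var/run/docker.sock docker:17.03.1-ce docker service update --image myorg/myapp:$CIRCLE_SHA1 myapp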
This all feels really hacky to me, and it would be nice if someone from CircleCI could comment on whether there is a more straightforward way to accomplish this, or if there are improvements coming in 2.0 that would make this easier.