Deploy to Docker Swarm on AWS

Hey all,

I’m wondering if anyone has managed to attach their CircleCI docker client to a swarm and deploy to it?

I’ve gotten as far as building and pushing my image, and I can connect locally to a swarm, but I can’t seem to do the auth in CircleCI. If anyone has managed it, I’d love to know how.
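For context, the part that works looks roughly like this (image and swarm names are placeholders for my real ones):

docker build -t myorg/myapp:$CIRCLE_SHA1 .
docker login -u "$DOCKER_USER" -p "$DOCKER_PASS"
docker push myorg/myapp:$CIRCLE_SHA1
# locally this attaches to the swarm fine, but in CircleCI the auth step fails:
docker run --rm -ti -v /var/run/docker.sock:/var/run/docker.sock dockercloud/client myswarm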

Thanks,
Kevin

@FelicianoTech any pointers here?

I don’t have experience with Docker Swarm, sorry. :frowning:

Hello @skinofstars,

Thank you for your question!

After a Google search I was able to find this blog post:

Hopefully that helps you; please don’t forget to share your solution back here!

Best, Zak

Thanks for the article. However, that’s really more about how one pushes/pulls from Docker Hub, rather than connecting to a swarm.

I think my problem is more about how I attach to the swarm. I’ve started a thread over at the Docker forums.

I’ve got this working, but it feels so hacky.
I read your post on the Docker forum as well, and I agree that option number one is ideal, so that’s what I worked towards.

docker run --rm -ti -v /var/run/docker.sock:/var/run/docker.sock -e DOCKER_HOST dockercloud/client help

That shows a lot of information; namely that you can use -u and -p to specify the username and password. So:

docker run --rm -ti -v /var/run/docker.sock:/var/run/docker.sock -e DOCKER_HOST dockercloud/client -u "$DOCKER_USER" -p "$DOCKER_PASS" "$SWARM"

This removes the need for it to be interactive.
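If the login succeeds, the client starts its proxy container and prints an export command along these lines (the actual address and port will differ):

export DOCKER_HOST=tcp://<ip>:<port>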
The next problem was that once you run the export command the client outputs, you no longer have access to a docker host. The docs state:

For security reasons, the Docker Executor doesn’t allow building Docker images within a job space.

So the issue is that the container started by the dockercloud client isn’t on the job host; it’s on a remote host already. Changing DOCKER_HOST in the job space therefore removes access to the remote host.
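To make that concrete, this is roughly the sequence that does not work (placeholder address as above):

# DON'T: running the printed export inside the job space
export DOCKER_HOST=tcp://<ip>:<port>
docker ps   # fails: the proxy lives on the remote host, so from the job space neither it nor the original docker host is reachable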
Instead, what I did was start another container on the remote host to run the swarm commands, like so:

docker run --rm -e DOCKER_HOST=172.17.0.1:32768 -v /var/run/docker.sock:/var/run/docker.sock docker:17.03.1-ce docker service list

Note that the IP address is not the one output by the dockercloud client command; it is the internal (docker bridge network) IP assigned to the dockercloud client-proxy container. Also note that I pinned the docker image version (17.03.1-ce) to match the version running in the swarm.
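If you’d rather not hardcode the address and port, both can be discovered from the job space. A rough, untested sketch (client_proxy_swarm_name stands for whatever the dockercloud client named its proxy container; check docker ps):

# recover the 172.17.0.1-style bridge address used above (here via the bridge gateway)
BRIDGE_IP=$(docker network inspect -f '{{(index .IPAM.Config 0).Gateway}}' bridge)
# the host port that the client-proxy published for 2375/tcp
PROXY_PORT=$(docker port client_proxy_swarm_name 2375/tcp | cut -d: -f2)
docker run --rm -e DOCKER_HOST=${BRIDGE_IP}:${PROXY_PORT} docker:17.03.1-ce docker service list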

This all feels really hacky to me, and it would be nice if someone from CircleCI could comment on whether there is a more straightforward way to accomplish this, or whether there are improvements coming in 2.0 that would make this easier.

Hope that helps.

I’m also working on the exact same scenario and I’m hitting the exact same issue due to CircleCI’s remote docker host.

Any help making this more straightforward/simple/automated is highly appreciated.

docker inspect -f '{{(index (index .NetworkSettings.Ports "2375/tcp") 0).HostPort}}' client_proxy_swarm_name

I just grabbed the host-forwarded port of the running proxy client and then applied it to the environment.
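Putting it together, roughly (same container name as above; in my setup the docker daemon is local, so the published port is reachable on localhost):

PROXY_PORT=$(docker inspect -f '{{(index (index .NetworkSettings.Ports "2375/tcp") 0).HostPort}}' client_proxy_swarm_name)
export DOCKER_HOST=tcp://localhost:${PROXY_PORT}
docker service ls   # talks to the swarm through the proxy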

My use case was in Jenkins, but I’m sure you can do the same in CircleCI.