FYI - Jobs that use the IP ranges feature and Remote Docker will begin to fast-fail this week

An FYI that starting this week, jobs that enable the IP ranges feature while also using Remote Docker will fail instead of silently continuing to execute without the expected IP ranges behavior.

Remote Docker support for the IP ranges feature has always been documented as a known limitation. The goal of this behavior change is to be transparent with users when functionality they believe is working is not actually behaving as intended.

Hi Sebastian! Is there a way to use a job that builds Docker images while using the IP ranges feature?



Curious as to why you removed this? It was a great feature (even if it wasn’t supposed to work it certainly has been for a long time).

Hey folks, when you enable the IP ranges feature on a job that uses Remote Docker, outgoing traffic is not sent through the set of IPs listed in our docs. As a result, we decided to be explicit about the fact that the feature does not work as most folks would expect with Remote Docker, instead of silently letting the job continue while using the “wrong” IPs.

Unfortunately, there is no workaround at this time. We hope to eventually add support for this functionality.

@tweeks-reify were you seeing different behavior? The outgoing traffic during job execution was going through the set of IPs I linked above?

Yup we used this for ~6 months with remote_docker and it worked flawlessly (hundreds of builds a day).

We were pulling the published IPs to allowlist traffic.

We have the same use case. This feels like an oversight by product; why not just show a warning and allow the job to continue?

Hey folks, I apologize for the confusion and pain this change caused, I’ll take full responsibility for that.

Let me be more precise and give a little bit of context into the motivation for making this change:

Before the change, when the IP ranges feature was used in a Remote Docker job, outgoing traffic from the Remote Docker VM was not sent through the documented set of IPs. Traffic from the primary container, however, was.

The vast majority of users we talk to who need the IP ranges feature in conjunction with Remote Docker need the outgoing traffic from the VM to use those IPs and do not care about the primary container. That is one reason we made this change: to avoid confusion where users think the feature is working one way when in reality it is not.
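For concreteness, here is a minimal sketch of a config that hits this case (image tag and URLs are hypothetical placeholders): the `circleci_ip_ranges` flag applies to the primary container, but traffic from the Remote Docker VM does not go through those IPs.

```yaml
version: 2.1

jobs:
  build:
    docker:
      - image: cimg/base:2023.01   # hypothetical image tag
    # Opts this job into the IP ranges feature
    circleci_ip_ranges: true
    steps:
      - checkout
      # Traffic from steps running in the primary container
      # went through the documented IP ranges...
      - run: curl https://internal.example.com/health
      # ...but traffic from the Remote Docker VM did not,
      # e.g. image pulls/pushes during `docker build`.
      - setup_remote_docker
      - run: docker build -t myapp .
```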

Secondly, we are in the process of re-architecting how Remote Docker works on CircleCI to be much more performant, reliable, and flexible. The new architecture currently does not send any outgoing traffic through the set of IPs if the IP ranges feature is enabled, not even traffic from the primary container.

To prepare for moving all Remote Docker jobs from the existing architecture to the new, more performant and reliable one, we removed the IP ranges functionality from Remote Docker jobs to avoid incompatibility issues. If we had not, users of the IP ranges feature with Remote Docker jobs would be stuck on a less reliable system in addition to missing out on the benefits of the new architecture. At this point in time, once a job is moved to the new architecture, all traffic goes through ephemeral IPs.

The communication on this point, however, was inadequate, and I apologize for that. We did not introduce this new behavior to fail workflows intentionally. We want to give customers the best Remote Docker experience, and this was one way to ensure all users get those benefits, not just those who don't use IP ranges.

I can understand that this feels like a regression in the platform to your teams. We do plan on adding “full functionality” for Remote Docker jobs with the IP ranges feature. You can track updates on enabling this functionality fully for Remote Docker jobs on Canny.

This re-architecture will actually enable us to provide that capability faster than we would have with the old architecture. In the meantime, we've seen users work around this gap using one of the following:

- a Machine executor with a VPN,
- Kaniko in a normal Docker job,
- splitting the job into two, so that the steps that need IP ranges run in a Docker job without Remote Docker and the step that requires setup_remote_docker runs in a separate job, or
- a self-hosted Runner.
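As a sketch of the job-splitting workaround (job names, image tags, and URLs below are hypothetical): the steps that need the fixed egress IPs stay in a plain Docker job with IP ranges enabled, while the Docker build moves to a separate job without it.

```yaml
version: 2.1

jobs:
  # Builds the image; no IP ranges here, so Remote Docker works normally.
  build-image:
    docker:
      - image: cimg/base:2023.01
    steps:
      - checkout
      - setup_remote_docker
      - run: docker build -t myapp .

  # Needs the documented egress IPs, e.g. to reach an IP-allowlisted service.
  deploy:
    docker:
      - image: cimg/base:2023.01
    circleci_ip_ranges: true
    steps:
      - run: curl https://allowlisted.example.com/deploy

workflows:
  build-and-deploy:
    jobs:
      - build-image
      - deploy:
          requires:
            - build-image
```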

I’m looking into a temporary roll-back to give users a bit of time before re-enabling this change. I’ll post in here when I have more on that front.


Thanks for the detailed explanation @sebastian-lerner ! That clarifies why it was working for us (the step that used remote_docker did not rely on the IPs).


We have rolled back this change temporarily.

The pause will last until August 12. On August 12, jobs that use IP ranges and Remote Docker will fail. Please see the list of workarounds listed above.

This is another option that we’ve seen users take advantage of to build Docker images without Remote Docker.

Hi folks,

Tools like Kaniko are useful here, since Kaniko does not require a Docker daemon (and thus Remote Docker) to build and push Docker images to your registry (e.g., AWS ECR).

I have published a how-to guide here on how to use Kaniko to build and push your Docker image(s) to Docker Hub and AWS ECR.
I hope this may be useful! :nerd_face:
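For anyone who wants a quick sketch before reading the full guide: Kaniko runs in the primary container itself, so no Remote Docker is needed and the IP ranges feature applies to its traffic. The registry, credentials variables, and image name below are hypothetical placeholders, and this is only a minimal outline of the approach.

```yaml
version: 2.1

jobs:
  build-and-push:
    docker:
      # Kaniko's debug image includes a shell, which CircleCI needs to run steps.
      # Note: the image must also provide git for `checkout`; if it does not,
      # fetch your source some other way (e.g. attach_workspace from a prior job).
      - image: gcr.io/kaniko-project/executor:debug
        entrypoint: ""
    circleci_ip_ranges: true
    steps:
      - checkout
      - run:
          name: Build and push with Kaniko
          command: |
            # DOCKER_USER / DOCKER_PASS are hypothetical env vars
            # configured in the CircleCI project settings.
            echo "{\"auths\":{\"https://index.docker.io/v1/\":{\"username\":\"${DOCKER_USER}\",\"password\":\"${DOCKER_PASS}\"}}}" \
              > /kaniko/.docker/config.json
            /kaniko/executor \
              --dockerfile Dockerfile \
              --context dir://. \
              --destination docker.io/myorg/myapp:latest
```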