Hey @agentreno! We’re still working out the details for orbs. We hope to be able to resolve some of this with Docker so it doesn’t land solely on users’ shoulders, but we don’t want to make any promises right now. We will definitely release an official statement before the Nov 1 deadline. As soon as I have info, I will be sure to share it here.
Wow guys, this seems absurd to make us solve this if we’re using your own images. Why can’t you cache your images on your end rather than putting all of the onus on Docker Hub to provide free bandwidth to you? It seems reasonable that they’d want to limit this, and as I’d imagine the vast majority of your customers are using your stock images, it seems like caching them on your end would all but solve this issue.
Unfortunately caching is not a viable solution in this case.
Because docker image tags are not immutable, the presence of a tag on the local machine doesn’t mean it’s the image you want. It could also be that you don’t have permission to access an image that is cached on the disk.
In order to answer the questions of “is this the latest version?” and “can the current user use this image?”, we need to make a manifest request to the docker registry - which is where the rate limits are applied.
These problems are the same both when using the docker cache on disk, or deploying a caching proxy.
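To illustrate: even when the tag is already in the local cache, the engine still contacts the registry to compare digests. The output looks roughly like this (digest abbreviated):

```
$ docker pull circleci/python:3.7.7   # tag already present in the local cache
3.7.7: Pulling from circleci/python
Digest: sha256:...
Status: Image is up to date for circleci/python:3.7.7
```

That "up to date" check is itself a manifest request, and manifest requests are what Docker Hub counts against the limit.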
Docker’s original rate limit plans were to limit layer downloads - which would have benefitted from additional caching - but those plans were changed due to feedback from the user base because they were deemed too complicated to reason about.
We are working closely with Docker to provide a better solution, but until that change is finalised the most reliable way to avoid being affected is to add authentication.
How do I tell that I need to add Docker authentication?
I am using the following images:
We have a FAQ article available from our support team now
In all likelihood, you’ll need to authenticate. If you hit the rate limit, you’ll get a standard API error showing you that you’ve hit the Docker Hub limit.
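As a rough self-check: anything that resolves to Docker Hub counts against the limit, including both official images (`python:3.8`) and CircleCI convenience images (`circleci/python:3.7.7`). Here is a minimal sketch (a hypothetical helper, not a CircleCI feature) of how Docker decides which registry an image reference points to:

```python
def pulls_from_docker_hub(image: str) -> bool:
    """Return True if `image` resolves to Docker Hub (docker.io)."""
    if "/" not in image:
        # Bare names like python:3.8 are Docker Hub "library" images.
        return True
    first = image.split("/", 1)[0]
    # Docker treats the first path component as a registry hostname only
    # if it contains a dot or a colon, or is exactly "localhost";
    # otherwise the image defaults to Docker Hub.
    if "." in first or ":" in first or first == "localhost":
        return first in ("docker.io", "registry-1.docker.io", "index.docker.io")
    return True

print(pulls_from_docker_hub("circleci/python:3.7.7"))  # True
print(pulls_from_docker_hub("python:3.8"))             # True
print(pulls_from_docker_hub("gcr.io/my-proj/app:1"))   # False
```

If a helper like this returns True for an image in your config, that pull is subject to the rate limits and will benefit from authentication.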
I had the same question and hope we can confirm this somehow soon. I tried using valid credentials and then invalid credentials and in both cases my build was successful and the logs appeared the same each time.
I have other questions:
- is there a better way to provide the password credentials? I am not comfortable committing the credentials to a repo that I have to share with other developers from time to time.
- do I have to repeat the credentials when the structure is like this:
```yaml
version: 2.1
jobs:
  build:
    docker:
      # specify the version you desire here
      # use `-browsers` prefix for selenium tests, e.g. `3.7.7-browsers`
      - image: circleci/python:3.7.7
        # adding dockerhub auth because Docker Hub changed their policy
        # See https://discuss.circleci.com/t/authenticate-with-docker-to-avoid-impact-of-nov-1st-rate-limits/37567/23
        # and https://support.circleci.com/hc/en-us/articles/360050623311-Docker-Hub-rate-limiting-FAQ
        auth:
          username: secret
          password: even_more_secret
        environment:
          DATABASE_URL: postgresql://root@localhost/circle_test?sslmode=disable
          USE_DOCKER: no
      # Specify service dependencies here if necessary
      # CircleCI maintains a library of pre-built images
      # documented at https://circleci.com/docs/2.0/circleci-images/
      - image: circleci/postgres:10.3
        # database image for service container available at `localhost:<port>`
        # adding dockerhub auth because Docker Hub changed their policy
        # See https://discuss.circleci.com/t/authenticate-with-docker-to-avoid-impact-of-nov-1st-rate-limits/37567/23
        # and https://support.circleci.com/hc/en-us/articles/360050623311-Docker-Hub-rate-limiting-FAQ
        auth:
          username: secret
          password: even_more_secret
        environment:
          # environment variables for database
          POSTGRES_USER: root
          POSTGRES_DB: circle_test
    working_directory: ~/repo
```
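On the first question: rather than committing literal credentials, one option is to reference environment variables in the config and supply the actual values through a CircleCI context or the project's environment-variable settings. A sketch, assuming variables named `DOCKERHUB_USERNAME` and `DOCKERHUB_PASSWORD`:

```yaml
auth:
  username: $DOCKERHUB_USERNAME
  password: $DOCKERHUB_PASSWORD  # value set in a context or project env vars
```

The `auth:` key does still have to appear on each image entry, though a YAML anchor can cut down the repetition.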
Thanks for the reply Alican! I set up Docker authentication for my CircleCI jobs so I should be good now.
I’m not super happy that Docker is now going to make data profiles on all of us based on the images we pull, but that’s not something CircleCI can do much about of course.
We decided to use JFrog to cache Docker Hub artifacts, using JFrog’s remote and virtual repositories.
> but we don’t want to make any promises right now. We will definitely release an official statement before the Nov 1 deadline
I think the issue is, if the answers come sometime before the deadline, that doesn’t give people a lot of time to make the changes they may need to make.
I’m wondering, I think you all are still split between AWS and GCP, but for the stuff on GCP, would configuring Docker on the GCP hosts, at least, to use the gcr.io dockerhub mirroring (https://cloud.google.com/container-registry/docs/pulling-cached-images) work?
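For reference, the linked GCP page has you point the Docker daemon at the mirror via `daemon.json`. Note the mirror only caches public Docker Hub images, and my understanding is that a cache miss still results in a pull from Docker Hub:

```json
{
  "registry-mirrors": ["https://mirror.gcr.io"]
}
```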
Is Circle considering hosting its own convenience images on a separate registry and / or mirroring them to GCR / ECR / etc? That way, at least users could use the same auth for their own private images and Circle’s convenience ones (assuming permissions were open enough).
Would it be feasible to have magic global environment variables for this so that we don’t have to change hundreds of lines of code?
Nthing this. We use orbs extensively, but also have a lot of workflows which don’t currently have a context. So we’d have to modify hundreds or even thousands of individual projects’ configs to add a context where one isn’t now, even if we added the auth piece to our executors in orbs.
Even if it’s an organization-wide flag, or an env var that only gets set if credentials are defined, something along these lines would make this much more practical than updating contexts and / or env variables in tons of projects.
I added Docker authentication to my CircleCI builds, but how can I see if this works ahead of the November 1 deadline?
The ‘Spin up environment’ step shows no information about authentication, nor does it mention which username is being used. The FAQ links to a page that discusses setting up authentication, but with no information on how to verify that it works.
Nor can I find any pull-usage information in my Docker account.
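One way to check (taken from Docker’s own rate-limit documentation) is to request a token and make a HEAD request against the special `ratelimitpreview/test` manifest; pass `-u username:password` on the token request to see your authenticated limits rather than the anonymous ones. The values below are just illustrative:

```
$ TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
$ curl -s --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit
ratelimit-limit: 100;w=21600
ratelimit-remaining: 98;w=21600
```

If the `ratelimit-remaining` value matches the quota for an authenticated account, the credentials are being used.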
I’m using YAML’s merge feature to reduce duplication in a given config file. For instance:
```yaml
version: 2.1
docker-auth: &docker-auth
  auth:
    username: $DOCKERHUB_USERNAME
    password: $DOCKERHUB_PASSWORD
workflows:
  version: 2
  workflow:
    jobs:
      - job1:
          context: dockerhub
      - job2:
          context: dockerhub
jobs:
  job1:
    docker:
      - image: circleci/whatever
        <<: *docker-auth
    steps:
      - checkout
  job2:
    docker:
      - image: circleci/whatever
        <<: *docker-auth
    steps:
      - checkout
```
Agree. I have the same question
Thank you. I will try this and update here again
My organization has the performance plan, which should allow us to get the silver support plan automatically I believe. I opened a support ticket for this question below but have not been able to get a response yet, and we need to update a large number of CircleCI config files correctly before Nov 1. Any help would be appreciated. Thanks
We are using a docker executor job as shown below. We have a context set up with the Docker credentials. With the changes to Docker Hub rate limiting and the required authentication, we are now authenticating for the primary container set up in the job below.
My question is whether the credentials set up for the primary container are passed to the remote environment created by the `setup_remote_docker` step when it runs docker build/push/run commands, or whether we need to add an explicit `docker login` step before running `docker run` (which I assume runs in the remote environment).
This is the document I read, and it doesn’t clarify the point. I should mention that you need `setup_remote_docker` as a step if you use the docker executor, so it cannot just be removed.
```
- image: circleci/python:3.7.3-stretch
name: Run tests
docker run --rm -t --net=host
git rev-parse HEAD
/bin/sh -c "echo placeholder"
```
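To be safe, my understanding is that the remote engine does not inherit the primary container’s `auth:` key, so if the remote engine pulls from Docker Hub you can log it in explicitly. A sketch, assuming the context exposes `DOCKERHUB_USERNAME` and `DOCKERHUB_PASSWORD`:

```yaml
- setup_remote_docker
- run:
    name: Log the remote Docker engine in to Docker Hub
    command: echo "$DOCKERHUB_PASSWORD" | docker login --username "$DOCKERHUB_USERNAME" --password-stdin
```

Because the `docker login` command in a run step executes against the remote engine, pulls done there by `docker build` and `docker run` would then be authenticated.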
Thank you, I think it works, but we’ll only know for sure come Nov 1st.
They may have resolved this by now. I did a test yesterday following the new tutorial in the docs for authenticating your executor setup (https://circleci.com/docs/2.0/private-images/). I added a few extra characters to my password and then the job started to fail, saying it couldn’t pull the image:
Error response from daemon: Get https://registry-1.docker.io/v2/cimg/node/manifests/14.13.1: unauthorized: incorrect username or password