Update
We’ve added a warning message when auth is invalid, so you should see:
Warning: No authentication provided, this pull may be subject to Docker Hub download rate limits.
Any word on authenticating to orbs like aws-ecr? Even if CircleCI and Docker work out a deal to avoid throttling, it seems useful to allow non-anonymous pulls. And if you don’t work out a deal, Nov 1 is only two weeks away now.
We’re working on it right now, and will hopefully have something official to share very soon that will make it easy.
@trevorr We do have a support article about workarounds for orbs right now if you’re interested!
Any direction you can provide on this one?
Great, thank you!
For anyone looking for this message: it appears around the fifth line of the ‘Spin up environment’ task. For example:
Build-agent version 1.0.41417-4036f5a3 (2020-10-16T14:37:07+0000)
Docker Engine Version: 18.09.6
Kernel Version: Linux 8dc5fabcbe7d 4.15.0-1077-aws #81-Ubuntu SMP Wed Jun 24 16:48:15 UTC 2020 x86_64 Linux
Starting container cibuilds/hugo:0.62.2
Warning: No authentication provided, this pull may be subject to Docker Hub download rate limits.
image cache not found on this host, downloading cibuilds/hugo:0.62.2
0.62.2: Pulling from cibuilds/hugo
...
FYI, I added this to the first step of each job so that any unauthenticated pull fails fast with a connection error. (In short, it redirects Docker Hub domains to localhost so the connection fails.)
- run: |
    echo "127.0.0.1 registry-1.docker.io auth.docker.io index.docker.io dseasb33srnrn.cloudfront.net production.cloudflare.docker.com" | sudo tee -a /etc/hosts
    sleep 5
You would still have to manually verify the docker executor image yourself.
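If it helps, the same idea can be extended with an explicit check so the step output makes the override obvious (a sketch; the `getent` lookup just confirms the /etc/hosts entry took effect, and the shortened domain list is illustrative):

```yaml
- run:
    name: Block Docker Hub and verify the override
    command: |
      echo "127.0.0.1 registry-1.docker.io auth.docker.io index.docker.io" | sudo tee -a /etc/hosts
      # Should resolve to 127.0.0.1 after the override; fail the step otherwise
      getent hosts registry-1.docker.io | grep -q '^127\.0\.0\.1' || exit 1
```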
Any recommendations on how to enable authenticating with Docker in forked PRs?
So far, the only approach I can think of is to enable “Pass secrets to builds from forked pull requests” and then use contexts to limit which environment variables are accessible to the jobs run from forked PRs.
In other words, if you’re using Docker executors in jobs that are exposed to forked PRs, there seems to be no way to fully protect your Docker username and access key.
Would be nice if CircleCI added project or org-level authentication that would take place outside of the pipeline.
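For what it’s worth, the context approach described above can be sketched like this (the context name `dockerhub-creds` and the env var names are my own placeholders, not anything official):

```yaml
version: 2.1
jobs:
  build:
    docker:
      - image: cimg/base:stable
        auth:
          username: $DOCKERHUB_USERNAME  # set in the context, not in project settings
          password: $DOCKERHUB_PASSWORD
    steps:
      - checkout
workflows:
  main:
    jobs:
      - build:
          context: dockerhub-creds  # hypothetical context holding only read-only Hub creds
```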
I really want to suggest (again) that something higher level that required less adjustment of individual steps would really have been ideal.
Also, w/r/t orbs, it would be nice if Circle could publish a convention (e.g., DOCKERHUB_USER / DOCKERHUB_PASSWORD) that would be relatively standard, so that orbs could (if people followed the convention) take those from the environment and/or a context without additional parameter configuration.
As for both orbs and the convention, there’s a very interesting comment from @KyleTryon below, with a little tease:
Also, this confirms that DOCKERHUB_USERNAME would probably be the convention for anything that did need those magic env vars (re: my post above).
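To make the suggested convention concrete: an orb could declare its auth parameters with `env_var_name` defaults, so callers who export DOCKERHUB_USERNAME / DOCKERHUB_PASSWORD (via a context or project settings) would need no extra parameters. This is only a sketch, not any published orb’s actual interface:

```yaml
# Hypothetical orb command following the proposed convention
commands:
  authenticated-login:
    parameters:
      docker-username:
        type: env_var_name
        default: DOCKERHUB_USERNAME
      docker-password:
        type: env_var_name
        default: DOCKERHUB_PASSWORD
    steps:
      - run: |
          # The parameter values are env var *names*, so ${<< ... >>} dereferences them
          echo "${<< parameters.docker-password >>}" | \
            docker login -u "${<< parameters.docker-username >>}" --password-stdin
```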
Good news!
CircleCI has partnered with Docker to ensure that our users can continue to access Docker Hub without rate limits. On November 1st, with few exceptions, you should not be impacted by any rate limits when pulling images from Docker Hub through CircleCI.
However, these rate limits may go into effect for CircleCI users in the future. That’s why we’re encouraging you and your team to add Docker Hub authentication to your CircleCI configuration and consider upgrading your Docker Hub plan, as appropriate, to prevent any impact from rate limits in the future.
This is great news (obviously, would have been greater before we scrambled to build tooling around updating thousands of configs, but still, great news, and we’re glad there was some kind of “last hour” agreement).
Any details on how this actually works under the hood (are there implicit credentials or env variables set up? Or some kind of mirroring, like what GCP uses?) I assume (given the “few exceptions” comment) that this applies to images other than Circle’s own convenience images?
I can give you a rough answer here - the short version is that it relies on CircleCI identifying the pulls coming from our system to Docker Hub. There’s no mirroring, so there’ll be no issues with staleness or desynchronisation.
The FAQ article from the support team has more details about exactly what is and isn’t affected.
As a user with dozens and dozens of build configurations, I say:
PLEASE add a simple UI in the CircleCI account settings where we can specify Docker Hub credentials. It would be so much better for everyone to set them there once.
Those would be global ones applying to all builds with Docker executor.
“with few exceptions”
Sorry, what would those be?
I’m a little confused on the “remote docker” stuff. In this more or less real world scenario, let’s say I start a build on image myorg/deploy:latest
Within my config, imagine I have a job like:
jobs:
  build-and-push:
    docker:
      - image: myorg/deploy
        auth:
          username: $DOCKERHUB_USERNAME
          password: $DOCKERHUB_PASSWORD
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - [auth against private registry]
      - docker build ...
      - docker push pri.vate/foo/someotherimage
In this scenario, am I good, assuming the base I’m building on is also a private image, not myorg/deploy? What if the base is a Docker Hub image, but I’m pushing to a private image? In that case, I assume authenticating is better; does running an auth step in the main container accomplish this for the remote Docker engine (I assume so, since it does work for our private registry)?
In other words, does the caveat about this not applying to remote Docker mean additional auth is required? Does setup_remote_docker itself also take or require auth params?
Hi @deeTEEcee! Remote Docker and Machine Executors will be impacted by the rate limiting unless pulling CircleCI-published images.
You can find more info in this FAQ
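Since remote Docker stays rate-limited, the usual workaround is an explicit `docker login` after `setup_remote_docker`; the login applies to the remote engine because the docker CLI in your steps talks to it. A minimal sketch, reusing the env var names from earlier in this thread:

```yaml
- setup_remote_docker
- run:
    name: Log the remote Docker engine in to Docker Hub
    command: |
      echo "$DOCKERHUB_PASSWORD" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
```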
Sorry, I got lost in this thread.
I just want to know the status of the orb work. Is there another thread or notification I can follow for it?
The FAQ says:
“However, custom docker images used with the Docker executor won’t be affected because of the previously mentioned partnership.”
Can you confirm how this affects orbs?
Does this mean that 3rd party orbs (for example, https://circleci.com/developer/orbs/orb/circleci/gcp-cli) won’t be affected by this on Nov 2nd, because even though they pull images that aren’t from the circleci or cimg Docker Hub namespaces, that pull happens from a Docker executor, and all Docker executors are covered by this partnership?
This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.