A Ruby-based build has begun failing at the ‘Spin Up Environment’ step. The failures seem to correlate with the rolling updates to the Docker images happening right now. This is the output of the step:
Build-agent version 0.1.1073-1f69f340 (2018-11-20T18:07:03+0000)
Starting container circleci/ruby:2.5-node
image cache not found on this host, downloading circleci/ruby:2.5-node
2.5-node: Pulling from circleci/ruby
54f7e8ac135a: Already exists
d6341e30912f: Already exists
087a57faf949: Already exists
5d71636fb824: Already exists
0c1db9598990: Already exists
341f09e17d45: Pulling fs layer
569a895c540d: Pulling fs layer
8167a8de89c3: Pulling fs layer
341f09e17d45: Download complete
341f09e17d45: Pull complete
8167a8de89c3: Verifying Checksum
8167a8de89c3: Download complete
569a895c540d: Download complete
569a895c540d: Pull complete
8167a8de89c3: Pull complete
Digest: sha256:937822264394cc653444e15a65595624e358c3a8a03ecc66356af35c042b5914
Status: Downloaded newer image for circleci/ruby:2.5-node
using image circleci/ruby@sha256:937822264394cc653444e15a65595624e358c3a8a03ecc66356af35c042b5914
Error response from daemon: linux spec user: unable to find user circleci: no matching entries in passwd file
We’ve got a workaround: pin to the last working Docker image sha256 digest. You can grab it from the “Spin Up Environment” step of the last successful build, then include it in your CircleCI config.
This is what we’re using for ruby-2.3.8-node-browsers: image: circleci/ruby@sha256:be06794b34768076ae9d8f94f8d7f3930d31ef866fbe3e6d53a4875d3ec245c0
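In case it helps, here is a minimal sketch of how that pin fits into the docker section of .circleci/config.yml (the build job name and checkout step are just illustrative; the digest is the ruby-2.3.8-node-browsers one quoted above, so substitute whatever your last successful Spin Up Environment step reported):

version: 2
jobs:
  build:
    docker:
      # Pin by digest instead of a mutable tag such as circleci/ruby:2.3.8-node-browsers
      - image: circleci/ruby@sha256:be06794b34768076ae9d8f94f8d7f3930d31ef866fbe3e6d53a4875d3ec245c0
    steps:
      - checkout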
We’re using Ruby 2.5.3, but we’re seeing the error when the redis image starts (which apparently happens before the Rails container):
Build-agent version 0.1.1073-1f69f340 (2018-11-20T18:07:03+0000)
Starting container circleci/ruby:2.5.3-node
image is cached as circleci/ruby:2.5.3-node, but refreshing...
2.5.3-node: Pulling from circleci/ruby
Digest: sha256:937822264394cc653444e15a65595624e358c3a8a03ecc66356af35c042b5914
Status: Image is up to date for circleci/ruby:2.5.3-node
using image circleci/ruby@sha256:937822264394cc653444e15a65595624e358c3a8a03ecc66356af35c042b5914
Starting container circleci/postgres:10.5-alpine-ram
image is cached as circleci/postgres:10.5-alpine-ram, but refreshing...
10.5-alpine-ram: Pulling from circleci/postgres
Digest: sha256:edfc66901712728228ebe5d809e90f88e87f29a9415a1ed55f54cb985b05f579
Status: Image is up to date for circleci/postgres:10.5-alpine-ram
using image circleci/postgres@sha256:edfc66901712728228ebe5d809e90f88e87f29a9415a1ed55f54cb985b05f579
Starting container selenium/standalone-chrome
image is cached as selenium/standalone-chrome, but refreshing...
latest: Pulling from selenium/standalone-chrome
Digest: sha256:c882250410b740f57bb193ef1b742b46bc1c447b634c3f25ce2f2d1a233778f8
Status: Image is up to date for selenium/standalone-chrome:latest
using image selenium/standalone-chrome@sha256:c882250410b740f57bb193ef1b742b46bc1c447b634c3f25ce2f2d1a233778f8
Starting container redis
using image redis@sha256:19f4621c085cb7df955f30616e7bf573e508924cff515027c1dd041f152bb1b6
Error response from daemon: linux spec user: unable to find user circleci: no matching entries in passwd file
✗ docker run circleci/python:2.7.15-node
docker: Error response from daemon: linux spec user: unable to find user circleci: no matching entries in passwd file.
ERRO[0000] error waiting for container: context canceled
We’re seeing the same problem, pulling from circleci/ruby:2.5.3-node (circleci/ruby@sha256:937822264394cc653444e15a65595624e358c3a8a03ecc66356af35c042b5914) and circleci/postgres:9.6-alpine-ram (circleci/postgres@sha256:6bde827b91b15d6250882ca6610f43570e89ef3a5db842b052c2201415cc7554).
Seeing this too. I can confirm that specifying the sha in config.yml for the ruby image (I got mine from the output of my last successful CI run) mitigates the issue for now.
Update: The issue has been identified and a fix has been applied. New images are already being published, but it will take a few hours to get through them all.
–
Thank you all for reporting.
Original Status
This is not solved, but I am locking the post while we investigate.
We are aware of the issue impacting many Docker images where the circleci user is missing, causing builds to fail. The issue has been escalated to our engineers, who are currently working on the problem.
Workaround
As a workaround, we suggest pinning to the specific sha256 digest used in a previous successful build. You can find this digest in the “Spin Up Environment” step of that successful job.
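For reference, a rough sketch of how those pinned digests slot into .circleci/config.yml when a job also uses service containers (the job name, checkout step, and <digest> placeholders are illustrative; copy the real sha256 values printed by the Spin Up Environment step of your last successful build):

version: 2
jobs:
  build:
    docker:
      # Replace each <digest> with the sha256 shown for that image in the
      # Spin Up Environment output of a previous successful build.
      - image: circleci/ruby@sha256:<digest>
      - image: circleci/postgres@sha256:<digest>
      - image: redis@sha256:<digest>
    steps:
      - checkout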