Hello,
I’m running the container agent for builds with the following config:
agent:
  resourceClasses:
    me/small-runner:
      token: "mytoken"
      spec:
        containers:
          - resources:
              limits:
                cpu: 500m
                memory: 2048Mi
    me/large-runner:
      token: "mytoken2"
      spec:
        containers:
          - resources:
              limits:
                cpu: 4
                memory: 8192Mi
And the build config is as follows:
version: 2.1
jobs:
  docker-build:
    docker:
      - image: docker:dind
    resource_class: me/small-runner
    parameters:
      appName:
        type: string
    steps:
      - setup_remote_docker
      - checkout
      - attach_workspace:
          at: ~/project
      - run:
          name: Build application Docker image
          command: |
            echo "$DOCKER_CONTAINER_REGISTRY_TOKEN" | docker login --username myuser --password-stdin ghcr.io
            docker build --build-arg APP=<< parameters.appName >> -t << parameters.appName >> .
workflows:
  docker-build:
    jobs:
      - docker-build:
          context: Cluster
          name: example-build
          appName: Example
          filters:
            branches:
              only:
                - circle_ci_test
When I try to run a build, I’m getting an error saying “This job was rejected because the setup_remote_docker feature only supports jobs using the Docker executor”.
I’ve been trying to find documentation around this, but I’m not finding anything. Any clue how to get this working?
Thanks!
Hi @jlourenco, at this time, setup_remote_docker is not supported on the container runner. Our FAQs call out our recommendation for building Docker images with the container runner. We have also seen customers have success with Kaniko.
I’ll make sure to add this to the limitations section to make it more obvious in the docs.
Hi @sebastian-lerner, are there any guides on how to set up Kaniko to work with the container runner? There is no documentation on this topic.
@mmilisav13 Right now we only have the docs for Buildah, but let me see if we can get something up for Kaniko as well. I’ll report back when I have more.
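In the meantime, here’s a rough sketch of the Buildah approach; the image tag, target registry, and the vfs storage-driver choice are assumptions on my part, not taken verbatim from the docs:

jobs:
  buildah-build:
    docker:
      - image: quay.io/buildah/stable:latest   # assumed tag
    resource_class: me/small-runner            # your container-runner resource class
    steps:
      - checkout
      - run:
          name: Build image with Buildah
          command: |
            # assumed: vfs storage driver, since overlayfs may not be available in the task pod
            buildah --storage-driver=vfs bud -t ghcr.io/myuser/myapp:latest .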
@sebastian-lerner Any updates on this topic? I’ve tried using the Kaniko image as the primary container and it fails with this error. Any recommendation on what the primary container should be when using container runners?
@mmilisav13 Still chasing this down. Can you confirm you’re using the latest version of the container runner? It should be visible in your “Self-hosted Runners” inventory screen in the left-hand nav of the UI. Can you also share a build link where the job is failing? You can send it via direct message if you’d prefer.
I think we figured it out.
We are using version 1.0.14465-111e087. Here is the link from one failure.
The kaniko image does not seem to have a shell installed, which is what’s causing that error.
It should be possible to add an explicit entrypoint from your CircleCI config: https://circleci.com/docs/configuration-reference#docker. Can you try adding entrypoint: sh in your CircleCI config right after the image key?
There’s a possibility, though, that the image is missing sh entirely, in which case it won’t work and you would need to build your own image based off of the kaniko image. I am trying to get an example into our docs, but it may take me a few days.
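For illustration, a minimal sketch of that suggestion (the job name and resource class here are assumed):

jobs:
  kaniko-build:
    docker:
      - image: gcr.io/kaniko-project/executor:debug
        entrypoint: sh   # overrides the image’s default entrypoint so the task agent can run steps
    resource_class: me/small-runner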
Hey @sebastian-lerner,
We ran quite a lot of tests and none of them were successful.
We ended up creating a Docker image:
FROM gcr.io/kaniko-project/executor:debug
LABEL com.circleci.preserve-entrypoint=true
SHELL ["/busybox/sh", "-c"]
RUN mkdir /sbin && \
    ln -s /busybox/sh /sbin
ENTRYPOINT ["/sbin/sh"]
Also tried with:
ENTRYPOINT ["/sbin"]
ENTRYPOINT ["/bin/sh"]
ENTRYPOINT ["sh"]
ENTRYPOINT ["tail", "-f", "/dev/null"]
We tried a few more things, but no matter what we do it always gives us the log:
command terminated with exit code 139
which, according to your documentation (https://circleci.com/docs/troubleshoot-self-hosted-runner#image-has-a-bad-entrypoint), indicates a bad entrypoint.
That said, we have tried everything and nothing seems to work, and having read that page I believe we are using a correct config even though it’s not working.
One of our internal support engineers got it to work using the CircleCI base convenience image and then adding the Kaniko binaries:
FROM cimg/base:current
ARG user=circleci
# Copy Needed Files from Kaniko Image
COPY --from=gcr.io/kaniko-project/executor --chown=$user /kaniko/executor /kaniko/executor
COPY --from=gcr.io/kaniko-project/executor --chown=$user /kaniko/docker-credential-gcr /kaniko/docker-credential-gcr
COPY --from=gcr.io/kaniko-project/executor --chown=$user /kaniko/docker-credential-ecr-login /kaniko/docker-credential-ecr-login
COPY --from=gcr.io/kaniko-project/executor --chown=$user /kaniko/docker-credential-acr-env /kaniko/docker-credential-acr-env
COPY --from=gcr.io/kaniko-project/executor --chown=$user /kaniko/.docker /kaniko/.docker
COPY --from=gcr.io/kaniko-project/executor --chown=$user /kaniko/ssl/certs /kaniko/ssl/certs/
COPY --from=gcr.io/kaniko-project/executor --chown=$user /etc/nsswitch.conf /etc/nsswitch.conf
# Setting Environment Variables for Kaniko
# ENV HOME /root
# ENV USER root
# FYI, ENV HOME and USER should not be set
ENV PATH="${PATH}:/kaniko"
ENV SSL_CERT_DIR=/kaniko/ssl/certs
ENV DOCKER_CONFIG /kaniko/.docker/
ENV DOCKER_CREDENTIAL_GCR_CONFIG /kaniko/.config/gcloud/docker_credential_gcr_config.json
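For reference, a job using an image built from that Dockerfile might look like the following sketch; the image name, registry, and --destination are assumptions, and registry credentials would still need to be configured under /kaniko/.docker:

jobs:
  kaniko-build:
    docker:
      - image: myorg/kaniko-circleci:latest   # assumed: image built from the Dockerfile above
    resource_class: me/small-runner
    steps:
      - checkout
      - run:
          name: Build and push with Kaniko
          command: |
            # the executor binary is on PATH via the ENV PATH line above
            executor --context "dir://$PWD" \
              --dockerfile Dockerfile \
              --destination ghcr.io/myorg/myapp:latest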
Could you try that?
That seems to have worked; at least we now have a new error, which looks like registry auth on our side, which is entirely expected.
We’ll let you know the result once we have it fully working.
Thanks!