Below is the error we are getting from CircleCI while creating the Docker image (remote Docker):
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 0E98404D386FA1D9 NO_PUBKEY 6ED0E7B82643E131 NO_PUBKEY F8D2585B8783D481
You will need to share your config.yml file, or at least the parts involved in setting up and configuring your environment, along with details of which step is failing.
That type of error can be thrown by apt-get, but at the moment that is all I can say.
A write-up of the apt-get error can be found here if that helps.
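If it is apt-get, one common workaround is to import the keys it is complaining about before retrying the index update. This is only a rough sketch, using the key IDs from your error message; whether it belongs in a Dockerfile RUN line or in a step on the build machine depends on where that apt-get actually runs:
# Import the archive keys reported as missing, then refresh the package index.
# apt-key is deprecated on newer releases but still available on Ubuntu 20.04 /
# Debian bullseye era images; prefix the commands with sudo if they run on the
# executor rather than inside a Dockerfile.
apt-key adv --keyserver keyserver.ubuntu.com \
  --recv-keys 0E98404D386FA1D9 6ED0E7B82643E131 F8D2585B8783D481
apt-get update
The config and the failing step's log would make it clearer which of those applies.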
We are getting this issue while building the Docker image. Below is the config.yml snippet:
version: 2.1
orbs:
  aws-ecr: circleci/aws-ecr@7.3.0
  aws-ecs: circleci/aws-ecs@2.2.1
  aws-cli: circleci/aws-cli@2.1.0
executors:
  build_docker:
    machine:
      image: ubuntu-2004:202010-01
      docker_layer_caching: true
jobs:
  airflow-deploy:
    executor: aws-cli/default
    steps:
      - checkout
      - aws-cli/setup:
          aws-access-key-id: AWS_ACCESS_KEY
          aws-region: AWS_REGION
          aws-secret-access-key: AWS_SECRET_KEY
      - run:
          working_directory: /home/circleci/project
          name: Sync DAGs to S3
          command: |
            chmod 777 sync.sh
            ./sync.sh
  package-lambdas:
    executor: aws-cli/default
    steps:
      - checkout
      - aws-cli/setup:
          aws-access-key-id: AWS_ACCESS_KEY
          aws-region: AWS_REGION
          aws-secret-access-key: AWS_SECRET_KEY
      - run:
          working_directory: /home/circleci/project/lambdas
          name: Create Zip, upload to S3, update Lambda
          command: |
            cd daily_textract
            pip install -r requirements.txt --target ./package
            cd package && zip -r ../daily_textract.zip .
            cd .. && zip -g daily_textract.zip *.py
            aws s3 cp daily_textract.zip s3://ngc-ingest-lambdas/daily_textract.zip
            aws lambda update-function-code --function-name daily_textract --s3-bucket ngc-ingest-lambdas --s3-key daily_textract.zip
  build-ingest:
    docker:
      - image: cimg/python:3.10.2
    steps:
      - checkout
      - restore_cache:
          key: deps1-{{ .Branch }}-{{ checksum "dags/requirements.txt" }}
      - run:
          working_directory: /home/circleci/project
          name: Install Python deps in a venv
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip install -r dags/requirements.txt
      - save_cache:
          key: deps1-{{ .Branch }}-{{ checksum "dags/requirements.txt" }}
          paths:
            - "venv"
      - persist_to_workspace:
          root: .
          paths:
            - .
  pytest:
    docker:
      - image: cimg/python:3.10.2
    steps:
      - attach_workspace:
          at: .
      - run:
          working_directory: /home/circleci/project
          name: Run Pytest
          command: |
            . venv/bin/activate
            pytest -s -k 'not runtime_baseline' --ignore tests/test_dags.py
  yapf-ingest:
    docker:
      - image: cimg/python:3.10.2
    steps:
      - attach_workspace:
          at: .
      - run:
          working_directory: /home/circleci/project
          name: Run YAPF
          command: |
            . venv/bin/activate
            yapf --diff --recursive --exclude 'venv/' --exclude 'lambdas/*/package/*' .
  ecs-update-ingest-stg:
    docker:
      - image: cimg/python:3.10.2
    steps:
      - setup_remote_docker:
          docker_layer_caching: true
      - aws-cli/setup:
          aws-access-key-id: DEV_AWS_ACCESS_KEY
          aws-region: AWS_REGION
          aws-secret-access-key: DEV_AWS_SECRET_KEY
      - run: |
          aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY} &&
          aws configure set aws_secret_access_key ${DEV_AWS_SECRET_KEY} &&
          aws iam get-user
      - run: |
          temp_role=$(aws sts assume-role --role-arn $AWS_DEPLOYMENT_ROLE_ARN --role-session-name "role_session")
          echo "export AWS_ACCESS_KEY_ID=$(echo $temp_role | jq .Credentials.AccessKeyId | xargs)" >> $BASH_ENV; source $BASH_ENV;
          echo "export AWS_SECRET_ACCESS_KEY=$(echo $temp_role | jq .Credentials.SecretAccessKey | xargs)" >> $BASH_ENV; source $BASH_ENV;
          echo "export AWS_SESSION_TOKEN=$(echo $temp_role | jq .Credentials.SessionToken | xargs)" >> $BASH_ENV; source $BASH_ENV;
      - aws-ecs/update-service:
          cluster-name: "ngc-cluster-staging"
          container-image-name-updates: "container=ingest-staging,tag=latest"
          family: "ingest-staging"
  ecs-update-ingest-prod:
    docker:
      - image: cimg/python:3.10.2
    steps:
      - setup_remote_docker:
          docker_layer_caching: true
      - aws-cli/setup:
          aws-access-key-id: DEV_AWS_ACCESS_KEY
          aws-region: AWS_REGION
          aws-secret-access-key: DEV_AWS_SECRET_KEY
      - run: |
          aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY} &&
          aws configure set aws_secret_access_key ${DEV_AWS_SECRET_KEY} &&
          aws iam get-user
      - run: |
          temp_role=$(aws sts assume-role --role-arn $AWS_DEPLOYMENT_ROLE_ARN --role-session-name "role_session")
          echo "export AWS_ACCESS_KEY_ID=$(echo $temp_role | jq .Credentials.AccessKeyId | xargs)" >> $BASH_ENV; source $BASH_ENV;
          echo "export AWS_SECRET_ACCESS_KEY=$(echo $temp_role | jq .Credentials.SecretAccessKey | xargs)" >> $BASH_ENV; source $BASH_ENV;
          echo "export AWS_SESSION_TOKEN=$(echo $temp_role | jq .Credentials.SessionToken | xargs)" >> $BASH_ENV; source $BASH_ENV;
      - aws-ecs/update-service:
          cluster-name: "ngc-cluster-prod"
          container-image-name-updates: "container=ingest-prod,tag=latest"
          family: "ingest-prod"
workflows:
  ############
  # STAGING
  ############
  stg-ingest-build-deploy:
    jobs:
      - build-ingest:
          context:
            - catalog-mvp-staging
          filters:
            branches:
              only:
                - main
      - pytest:
          requires:
            - build-ingest
          filters:
            branches:
              only:
                - main
      - yapf-ingest:
          requires:
            - build-ingest
          filters:
            branches:
              only:
                - main
      - airflow-deploy:
          context:
            - catalog-mvp-staging
          requires:
            - build-ingest
            - pytest
            - yapf-ingest
          filters:
            branches:
              only:
                - main
      - aws-ecr/build-and-push-image:
          context:
            - catalog-mvp-staging
          requires:
            - build-ingest
            - yapf-ingest
          executor: build_docker
          path: /home/circleci/project
          repo: "ingest-staging"
          tag: "latest,${CIRCLE_SHA1}"
          filters:
            branches:
              only:
                - main
      - aws-ecs/deploy-service-update:
          context:
            - catalog-mvp-staging
          requires:
            - build-ingest
            - aws-ecr/build-and-push-image # only run this job once aws-ecr/build-and-push-image has completed
          family: "ingest-staging"
          cluster-name: "ngc-cluster-staging"
          container-image-name-updates: "container=ingest-staging,tag=latest"
          filters:
            branches:
              only:
                - main
OK, my first guess is that the image ubuntu-2004:202010-01 you are using as your base environment is now so old that repositories have started to drop support for it. You should try a later release from the options available (for example, a current tag such as ubuntu-2004:2023.07.1).
Ubuntu 20.04 is an LTS release with a five-year standard support cycle from its original release in April 2020, but I am unsure whether an image snapshot from October 2020 is still considered supported when something like apt-get update runs, which is likely to happen as you install the AWS tools. To be certain you would need to post your log output. If an apt-get update is taking place, you are effectively trying to bring that environment up to the current 2023.07.1 release on every run anyway, at the cost of the CI system spending time on that step each time the workflow runs (and so incurring extra fees).
Over the life of 20.04 there have in fact been six point releases (the full version behind the current Ubuntu-supplied image is 20.04.6), and the archive public keys may have been updated as part of one of those releases.
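If you want to confirm where the failing apt-get actually lives before changing anything, rerunning the job with SSH and poking at the machine executor directly should tell you. A minimal sketch, assuming your Dockerfile sits in the project root (which the path: parameter you pass to aws-ecr/build-and-push-image suggests):
# Run these after "Rerun job with SSH" on the machine executor.
lsb_release -a                   # which Ubuntu point release the 202010-01 image actually ships
sudo apt-get update              # if NO_PUBKEY shows up here, the host image itself is the problem
cd ~/project && docker build .   # if it only shows up here, the fix belongs in your Dockerfile
Either way, moving the build_docker executor to a current image tag is the cheap change to try first.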
This issue is gone; now the issue looks different:
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [24 lines of output]
/bin/sh: 1: pkg-config: not found
/bin/sh: 1: pkg-config: not found
Trying pkg-config --exists mysqlclient
Command 'pkg-config --exists mysqlclient' returned non-zero exit status 127.
Trying pkg-config --exists mariadb
Command 'pkg-config --exists mariadb' returned non-zero exit status 127.
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/usr/local/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/usr/local/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "/tmp/pip-build-env-_bnuyw7m/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 355, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
File "/tmp/pip-build-env-_bnuyw7m/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 325, in _get_build_requires
self.run_setup()
File "/tmp/pip-build-env-_bnuyw7m/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 341, in run_setup
exec(code, locals())
File "<string>", line 154, in <module>
File "<string>", line 48, in get_config_posix
File "<string>", line 27, in find_package_name
Exception: Can not find valid pkg-config name.
Specify MYSQLCLIENT_CFLAGS and MYSQLCLIENT_LDFLAGS env vars manually
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
[notice] A new release of pip is available: 23.0.1 -> 23.2.1
[notice] To update, run: pip install --upgrade pip
The command '/bin/sh -c pip install --user -r requirements.txt && pip install awscli' returned a non-zero code: 1
Exited with code exit status 1
That looks much like the issue described here:
You reference Python 3.x a lot in the script but then use the command pip, which will relate to the Ubuntu-installed Python 2.x environment. The command pip3 exists for working with Python 3.x.
The issue here is more about your deployment and use of Ubuntu than about the CircleCI tool set, so the amount of help I can provide is limited, as I do not know Python. You may want to simplify your build script and debug the process step by step.
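As a concrete first step for that debugging, the log above already names the missing piece: the build fails while pip compiles mysqlclient because pkg-config (and the MySQL/MariaDB development files it would locate) are not present in the image that runs pip install. A sketch of what the Dockerfile's install step could look like, assuming a Debian/Ubuntu-based base image; the package names are mysqlclient's usual build prerequisites, not something taken from your actual Dockerfile:
# Install the build prerequisites the traceback complains about, then install
# the Python requirements. Calling pip through the interpreter also removes any
# ambiguity about which Python the packages end up installed for.
apt-get update && apt-get install -y --no-install-recommends \
    pkg-config build-essential default-libmysqlclient-dev python3-dev
python3 -m pip install --user -r requirements.txt
python3 -m pip install --user awscli
The fix has to live inside the Dockerfile rather than in the CircleCI job, because the error occurs during docker build, not on the executor.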