I have a CircleCI user and an associated role set up in AWS to deploy to ECR and EKS. The images are being pushed to ECR just fine. Somewhat related: the same role and user are used to access AWS services when running containers that need access to staging DBs for testing, and everything runs fine there.
However, when I try to access EKS in any capacity, it fails. The role has a permissions policy attached that grants all permissions for the staging cluster. I can verify that the kubeconfig file reflects the correct information, but no commands from the CI container are accepted. It fails with this error:
error: You must be logged in to the server (the server has asked for the client to provide credentials)
Exited with code exit status 1
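From what I have read, this error usually means the EKS API server does not recognize the caller's IAM identity, i.e. the assumed role is not mapped in the cluster's aws-auth ConfigMap. The mapping I would expect to need looks roughly like this (account ID, role name, username, and group are placeholders, not my actual values):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Placeholder ARN: the role CircleCI assumes must appear here,
    # bound to a Kubernetes username/group with sufficient RBAC.
    - rolearn: arn:aws:iam::123456789012:role/circleci-deploy
      username: circleci-deploy
      groups:
        - system:masters
```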
I have tried both the aws-eks and the kubernetes orbs directly, to no avail. This is what I am currently trying:
deploy-to-staging:
  docker:
    - image: cimg/aws:2024.03
  parameters:
    service:
      type: string
  steps:
    - checkout
    - run:
        name: Create deployment manifest
        command: |
          BUILD_DATE=$(date '+%Y%m%d%H%M%S')
          sed "s|CIRCLECI_SHA|$CIRCLE_SHA1|g;s|BUILD_DATE_VALUE|$BUILD_DATE|g;s|VERSION_INFO_VALUE|$CIRCLE_SHA1|g" \
            backend/kubernetes/staging/backend/deployment.yaml.template \
            > << parameters.service >>-deployment.yaml
    - aws-eks/update-kubeconfig-with-authenticator:
        cluster-name: prod
        install-kubectl: true
        aws-region: $AWS_DEFAULT_REGION
    - kubernetes/create-or-update-resource:
        resource-file-path: << parameters.service >>-deployment.yaml
        get-rollout-status: true
        resource-name: << parameters.service >>
        namespace: staging
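For what it's worth, the manifest-templating step itself is not the problem; here is a minimal local reproduction of the sed substitution from the job (the template contents, tag, and file names are made up for the demo):

```shell
#!/bin/sh
set -eu

# Stand-in values for the CircleCI environment variables used in the job
CIRCLE_SHA1=abc123
BUILD_DATE=$(date '+%Y%m%d%H%M%S')

# A throwaway template containing the same placeholder tokens
cat > deployment.yaml.template <<'EOF'
image: registry.example.com/backend:CIRCLECI_SHA
buildDate: "BUILD_DATE_VALUE"
version: "VERSION_INFO_VALUE"
EOF

# The same substitution the job's "Create deployment manifest" step performs
sed "s|CIRCLECI_SHA|$CIRCLE_SHA1|g;s|BUILD_DATE_VALUE|$BUILD_DATE|g;s|VERSION_INFO_VALUE|$CIRCLE_SHA1|g" \
  deployment.yaml.template > backend-deployment.yaml

cat backend-deployment.yaml
```

The rendered manifest comes out with all three placeholders replaced, so the failure is isolated to the kubeconfig/authentication step, not the manifest generation.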