AWS EKS orb not authenticating

Hi, I have a simple job using the circleci/aws-eks@1.1.0 orb, taken almost directly from the example in the docs.

All the setup steps work as expected until I call any kubectl command, at which point I get an error: "You must be logged in to the server (Unauthorized)".

I did a little research, and most people who receive this error get it when the cluster wasn't built by the same job or credentials. I'm in that situation: I need to update deployments in a previously built cluster. I feel like I'm missing a fundamental setup piece, but nothing else is specified in the docs. My guess is that I need to create an IAM user or role with the correct access for this to work, but that part is missing from the docs.

Anyone have any ideas?

Relevant pieces of my config.yml:

version: 2.1

orbs:
  aws-ecr: circleci/aws-ecr@7.0.0
  aws-eks: circleci/aws-eks@1.1.0
  kubernetes: circleci/kubernetes@0.4.0

jobs:
  # Run db migrations...
  run-db-migrations:    # job name is hypothetical; it was elided in the original post
    executor: aws-eks/python3
    parameters:
      job_yml:
        type: string
    steps:

      - checkout

      - aws-eks/update-kubeconfig-with-authenticator:
          aws-region: ${AWS_REGION}
          cluster-name: my-cluster
          install-kubectl: true

      - run: 
          name: "Remove previous job..."
          command: kubectl delete -f << parameters.job_yml >> --ignore-not-found

      - run: 
          name: "Run migrations..."
          command: kubectl apply -f << parameters.job_yml >>
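For reference, the file passed as the job_yml parameter is a plain Kubernetes Job manifest, roughly along these lines (the name, image, and command are placeholders, not my actual values):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrations                 # placeholder name
spec:
  backoffLimit: 0                     # don't retry a failed migration automatically
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          # placeholder image; in practice this is the image pushed via the aws-ecr orb
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
          command: ["./run-migrations.sh"]   # placeholder migration command
```

The `kubectl delete ... --ignore-not-found` step above is needed because a completed Job can't be re-applied with the same name.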

This is solved by backing out the eks and kubernetes orbs, using a Linux machine executor, and configuring everything per the AWS instructions:

      - run:
          name: Provision instance
          command: |
              sudo apt-get update -y
              sudo apt-get install awscli groff -y
              pip install awscli
              # (kubectl download URL omitted from the original post; use the release matching your cluster version)
              sudo curl -o kubectl
              sudo chmod +x ./kubectl
              mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
              echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
              aws configure set default.region ${AWS_DEFAULT_REGION}
              aws configure set aws_access_key_id ${AWS_ACCESS_KEY_ID}
              aws configure set aws_secret_access_key ${AWS_SECRET_ACCESS_KEY}
              aws eks --region ${AWS_DEFAULT_REGION} update-kubeconfig --name my-existing-cluster
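For what it's worth, the update-kubeconfig call at the end writes a user entry into ~/.kube/config roughly like the following (account ID, region, and apiVersion vary by CLI version; all values here are placeholders). This is where IAM enters the picture: every kubectl call presents a token for that IAM identity, and the cluster has to recognize it:

```yaml
users:
  - name: arn:aws:eks:us-east-1:123456789012:cluster/my-existing-cluster
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws
        args: ["eks", "get-token", "--cluster-name", "my-existing-cluster", "--region", "us-east-1"]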

The orbs seem nice, but only worked for me on the narrowest of happy paths.

Upon further review…

This was actually caused by our utility user not being set up in the aws-auth Kubernetes ConfigMap. For some reason the utility user didn't need to be added there for kubectl access from Jenkins; that might be because our CI box is in the same VPC as the Kubernetes cluster.
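For anyone hitting the same thing, the missing piece was a mapUsers entry in the aws-auth ConfigMap in kube-system, something like the sketch below (account ID and user name are placeholders; system:masters grants full cluster-admin, so scope it down to a narrower RBAC group if you can):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::123456789012:user/circleci-utility   # placeholder IAM user
      username: circleci-utility
      groups:
        - system:masters
```

If you use eksctl, it can patch this for you, e.g. `eksctl create iamidentitymapping --cluster my-existing-cluster --arn <user-arn> --username circleci-utility --group system:masters`.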

Once the user was added to aws-auth, the original orb code worked without any of the manual provisioning steps.

