What is the proper way to handle ssh keys for gcloud access from container



I am trying to figure out the best way to handle SSH keys in containers for accessing Google Cloud Compute via SSH.

I used add_ssh_keys with the fingerprint of a key I added on my project’s settings page, and I noticed it appears in the container under the name id_rsa_FINGERPRINT.

However, gcloud compute complains there is no public key on the container:

#!/bin/bash -eo pipefail
gcloud compute ssh donatoaz@aospfacta --quiet --ssh-key-file=~/.ssh/id_rsa_FINGERPRINT --zone us-east1-b --command="cd ~/project && git pull"
WARNING: The public SSH key file for gcloud does not exist.
WARNING: Your SSH key files are broken.
private key (OK) [/root/.ssh/id_rsa_FINGERPRINT]
public key (NOT FOUND) [/root/.ssh/id_rsa_FINGERPRINT.pub]
We are going to overwrite all above files.
ERROR: (gcloud.compute.ssh) Aborted by user.
Exited with code 1

Am I getting this wrong? I can work around it by adding my public key as an environment variable on CircleCI, but it seems odd to have to do that.
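(One alternative I considered but have not verified: since the private key is already mounted in the container, the matching public key can be derived from it with ssh-keygen rather than stored separately. The file path below is just the illustrative name from the warning above. A sketch:)

```shell
# ssh-keygen -y prints the public key that corresponds to a given private key.
# Writing it next to the private key gives gcloud the .pub file it expects.
ssh-keygen -y -f ~/.ssh/id_rsa_FINGERPRINT > ~/.ssh/id_rsa_FINGERPRINT.pub
chmod 644 ~/.ssh/id_rsa_FINGERPRINT.pub
```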


Donatoaz - according to the posts I read … try working with add_ssh_keys in the steps block of your yml config file.


For future’s sake, what I eventually did was to load my public key, base64 encoded, into an ENV VAR, then loaded it up as part of the recipe.

      - add_ssh_keys:
          fingerprints:
            - "My finger print"
      - run:
          name: Decrypt ssh key via env variable
          command: |
            echo $GCLOUD_SSH_KEY_PUB | base64 --decode --ignore-garbage > ${HOME}/.ssh/id_rsa_FINGER_PRINT_WITHOUT_COLONS.pub;
            chmod 0600 ${HOME}/.ssh/id_rsa_FINGER_PRINT_WITHOUT_COLONS.pub;
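(For reference, the value I put into the GCLOUD_SSH_KEY_PUB environment variable was produced by base64-encoding the local public key, along these lines; the path is an example, and -w0 is a GNU coreutils option that disables line wrapping, which macOS base64 does not need:)

```shell
# Encode the local public key into a single line suitable for pasting
# into a CircleCI project environment variable.
base64 -w0 ~/.ssh/id_rsa.pub
```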

After doing this I was able to use gcloud from inside the container CircleCI spins up. I am still unsure whether this is the most elegant solution, but it gets my build to pass.


@donatoaz I agree that there should be a more elegant solution. According to the documentation CircleCI provides, if you add an SSH key, their VMs / containers should be able to take that info and create the private key we configured - yet we still have to do this add_ssh_keys step? Weird.


This topic was automatically closed 41 days after the last reply. New replies are no longer allowed.