We’re excited to announce that the add_ssh_keys feature is now fully supported in self-hosted runner jobs.
Use it to configure additional SSH keys for running processes on other services during job execution.
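For example, a job that needs one of the additional keys can load it with a step along these lines (the fingerprint below is a placeholder for a key you have added in the project's SSH key settings):

steps:
  - add_ssh_keys:
      fingerprints:
        - "aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99"

Later steps in the same job can then use the installed key when connecting to other services.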
Comment below with any questions or concerns.
Thanks, but for any self-hosted runner that is built on a system instance rather than a container instance, there also needs to be a remove-ssh-keys command to remove a deployed key after use.
The reason is that, unlike container-based runners and the runners provided by CircleCI, a self-hosted runner is likely to be a long-lived environment. Running many different jobs could therefore result in a build-up of many different private keys on the instance. Not only would the keys be accessible between jobs, but the support team that maintains the runners, and any backup system, would also have long-term access to them.
My personal solution to this issue can be seen in the following code block:
steps:
  - run:
      name: Reload container on target system
      command: |
        echo "$SECURITY_SSH_CIRCLECI_KEY" > private_key
        chmod 600 private_key
        ssh -o StrictHostKeyChecking=no -i private_key -o ConnectTimeout=10 circleci@<<parameters.target_system>> reload-backend
        ssh -o StrictHostKeyChecking=no -i private_key -o ConnectTimeout=10 circleci@<<parameters.target_system>> display-system-stats
        rm private_key
So I:
- place the key in a known file name
- use the key
- delete the key within the same run step, which removes the risk of the script being cancelled via the GUI before the delete takes place (see the sketch after this list).
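As an extra safeguard, here is a sketch of a variant I have not adopted yet, reusing the same key file name and target_system parameter as above: a shell trap ensures the key file is removed even when one of the ssh commands fails part-way through the step.

steps:
  - run:
      name: Reload container on target system
      command: |
        echo "$SECURITY_SSH_CIRCLECI_KEY" > private_key
        chmod 600 private_key
        # delete the key on any exit from this script, even if an ssh command fails
        trap 'rm -f private_key' EXIT
        ssh -o StrictHostKeyChecking=no -i private_key -o ConnectTimeout=10 circleci@<<parameters.target_system>> reload-backend
        ssh -o StrictHostKeyChecking=no -i private_key -o ConnectTimeout=10 circleci@<<parameters.target_system>> display-system-stats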
A future upgrade will take place when I next rebuild my base runner instances: I will include a RAM drive, both for performance and so that files like this never get persisted to physical storage.
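As a rough sketch of what I have in mind (the mount point, size, and user are placeholders for my environment), the runner's working directory can be backed by a tmpfs mount so job files such as the key above only ever live in RAM:

# hypothetical /etc/fstab entry for the runner's working directory
tmpfs  /home/circleci/work  tmpfs  size=512m,mode=0700  0  0

# or mount it by hand for testing and hand it to the runner user
sudo mount -t tmpfs -o size=512m,mode=0700 tmpfs /home/circleci/work
sudo chown circleci:circleci /home/circleci/work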
Great point, @rit1010. We’ve been looking internally at how we may be able to solve this particular use case for end users. We don’t have anything immediate to share, but it’s being actively worked through. I’ll post an update on Discuss when we have more.