SSH when using a self-hosted runner on k8s

How do I enable SSH when using a self-hosted runner on k8s?

I’m getting this error:

Failed to enable SSH: Oops - looks like we misconfigured our instances and ssh isn’t enabled properly for this build. If the problem persists, please contact our support.

That looks like a very unhelpful ‘general’ error.

When you say enable ssh, in what way?

One of the stated limitations is “The ability to rerun a job with SSH”.

This does make sense: the CircleCI agent runs within your defined environment and has no control over the environment it is being run in, so it has no way to allow remote SSH connections into the container.

One thing to note is that the SSH debugging feature is an embedded SSH server within the agent (for a normal runner), so it needs a fixed, dedicated, nonstandard port to be made available. This is fine for CircleCI’s dedicated systems, but it is something I have left disabled for my VM-based runners, as they do not have public IP addresses and so cannot accept external connections.
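If your runner host does have a reachable address, making that port available is an ordinary firewall change. A minimal sketch with iptables, assuming a hypothetical port 2222 (check your agent’s configuration for the actual port it listens on):

    # Allow inbound TCP to the port the agent's embedded SSH server listens on
    # (2222 is illustrative only, not the agent's documented port)
    sudo iptables -A INPUT -p tcp --dport 2222 -j ACCEPT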


Is there a guide for this? How do I enable it on k8s?

Do I just ssh into the pod with kubectl?

I think the point is that it’s a known limitation: the “build with SSH” UI option won’t work with self-hosted executors, so you’d get remote access to that host or container however you normally would, rather than using that feature. From a security standpoint this is probably also the better, safer way, though I do wish CircleCI would not show the UI option to rerun a job with SSH when the job used a self-hosted executor, since as it stands it still lets you try.

In the case of a Kubernetes-hosted runner, I wouldn’t SSH to it, but rather exec into the container if you need to for some reason (kubectl exec -it -n your-namespace pod-id -- /bin/bash or similar).
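For example, a minimal sketch of that workflow, assuming the container runner was installed via the standard Helm chart (the namespace and label selector here are assumptions; check your own deployment):

    # List the runner pods (namespace and label are assumptions from a typical install)
    kubectl get pods -n circleci-runner -l app.kubernetes.io/name=container-agent
    # Open an interactive shell in the pod you want to inspect
    kubectl exec -it -n circleci-runner <pod-name> -- /bin/bash

Note that to debug a job you would generally exec into the ephemeral task pod that runs the job, not necessarily the agent pod itself.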

Hey all, SSH reruns for container runner are now in open preview.

