I’m seeing some very odd behavior in Circle builds when trying to use the Kubernetes minikube project. Minikube is a Docker-based project that lets you run a single-node Kubernetes cluster. The install I am performing works perfectly if I do it all in a single build step, but it seems to fail as soon as I break it out into distinct build steps. I’ve slimmed this example down as much as possible from a more complex repo to show the issue.
Here’s what each stanza in the current config.yml is doing at a high level:
- Normal Circle VM setup - tools, Docker, Go, minikube
This step just sets up prerequisites such as Docker, nsenter, etc.
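For reference, the setup step looks roughly like this (the download URLs and versions are illustrative placeholders, not the exact ones from my repo):

```yaml
# Illustrative sketch only - versions and paths are placeholders
- run:
    name: install prerequisites
    command: |
      curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
      chmod +x minikube && sudo mv minikube /usr/local/bin/
      curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.10.0/bin/linux/amd64/kubectl
      chmod +x kubectl && sudo mv kubectl /usr/local/bin/
```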
- start minikube
This step starts minikube. Minikube reports as started correctly, and I can interact with the API using the kubectl binary. I then add a deployment and wait-loop until the deployment starts. So far so good; everything is still responding as expected. To rule out a timing issue, I then sleep for 120 seconds and verify that minikube is still up and operational and that I can still get a list of running pods. Everything works exactly as expected.
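The start step is approximately the following sketch. The deployment manifest and deployment name (deployment.yaml, mydeploy) are placeholders for the real ones in my repo, and the wait-loop is a simplified version of what I actually run:

```yaml
# Simplified sketch of the "start minikube" stanza; names are placeholders
- run:
    name: start minikube
    command: |
      sudo -E minikube start --vm-driver=none
      kubectl apply -f deployment.yaml
      # wait until the deployment reports at least one available replica
      until kubectl get deploy mydeploy -o jsonpath='{.status.availableReplicas}' | grep -q '[1-9]'; do
        sleep 5
      done
      # rule out a timing issue, then confirm everything is still healthy
      sleep 120
      minikube status
      kubectl get po
```

All of the above succeeds within this single run stanza.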
- check status
All I do here is run a minikube status and a kubectl get po. In both cases, they fail. Somehow, between the run stanzas, the minikube service has stopped. From previous testing, it does look like the Docker containers are still running.
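This failing stanza is nothing more than:

```yaml
# The exact same commands that succeeded at the end of the previous stanza
- run:
    name: check status
    command: |
      minikube status   # fails here
      kubectl get po    # also fails here
```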
Is there anything I should be aware of that would cause state to change between the various run stanzas? Does Circle modify processes, the environment, or the filesystem between stanzas? I’ve tried multiple flavors of this pattern, and things consistently work perfectly in a single run stanza but break in this manner as soon as I split them out.
Any insight into why this might be happening would be greatly appreciated.
P.S. I would have included much more detail and supporting links in this bug report, but I got hit by the two-link limit.