I am using a machine instead of Docker images (a few reasons for this) and I would like to have a single step that does all of the initial configuration of the machine and then have separate jobs use the existing, running instance of the machine. As far as I can tell, each job requires a machine (or docker) configuration that resets the environment. I can imagine ways to work around this that are purely Docker based, but again, there are a few reasons the machine is strongly preferred.
Thanks for the workspaces suggestion… I looked at that, but from the documentation it appeared that the workspace commands package the files I want to save and then unpack them for the next job. I would pretty much need to save the entire machine (the root / directory), since a big part of what I want is the Debian packages that were installed. Let me know if I misunderstood how workspaces function.
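For reference, this is roughly how workspaces are wired up in a CircleCI 2.0 config (job names, scripts, and paths here are placeholders, not from any real project):

```yaml
version: 2
jobs:
  setup:
    machine: true
    steps:
      - checkout
      - run: ./build.sh              # produces files under ./output
      - persist_to_workspace:        # archives only the listed paths
          root: .
          paths:
            - output
  test:
    machine: true                    # fresh machine: apt packages installed in `setup` are gone
    steps:
      - attach_workspace:
          at: .
      - run: ./test.sh output
workflows:
  version: 2
  build_and_test:
    jobs:
      - setup
      - test:
          requires:
            - setup
```

As the docs describe, only the files under the persisted paths travel between jobs; anything installed system-wide (e.g. via apt) is not carried over.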
So my desired workflow is getting a machine into the correct configuration and then running every following job using the machine in that state, no resets in between jobs.
I have effectively accomplished what I want with a single job containing a bunch of steps that simply do everything I need, although I would prefer to break it into several jobs.
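The single-job workaround can be sketched like this (the step commands are placeholders standing in for what would otherwise be separate jobs):

```yaml
version: 2
jobs:
  everything:
    machine: true
    steps:
      - checkout
      # one-time machine setup; persists for all later steps in this job
      - run: sudo apt-get update && sudo apt-get install -y build-essential
      - run: ./configure.sh   # would have been a "configure" job
      - run: ./build.sh       # would have been a "build" job
      - run: ./test.sh        # would have been a "test" job
```

Because all steps in a job share the same machine, the installed packages and any other state persist from step to step with no resets in between.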
In that case, I’d recommend creating a separate repo to build a Docker image that installs everything you need, and then you can use this as the build image for several jobs across your primary project (using the Docker executor). However, if you have to use a Machine executor, then this ideal approach is not open to you. Can you explain why you believe Machine is essential?
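As a sketch of that suggestion (the image name here is hypothetical), each job would just reference the prebuilt image, so the environment setup happens once at image-build time rather than in every job:

```yaml
version: 2
jobs:
  build:
    docker:
      - image: mycompany/ci-base:latest  # prebuilt image with all dependencies baked in
    steps:
      - checkout
      - run: ./build.sh
  test:
    docker:
      - image: mycompany/ci-base:latest  # same environment, no per-job setup needed
    steps:
      - checkout
      - run: ./test.sh
```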
I would describe it as “strongly desired” rather than “essential”. The machine setup works and is already compatible with the pre-existing pipeline, so it would save work to not have to change a working environment to meet the CI requirements. The docker environment also relies on docker-compose, and from my initial tests with docker-in-docker, it’s a horrifying experience akin to the insanity mentioned in the movie Inception.
So, yes… everything could be restructured to fit better into containers, but that creates a lot more work and makes CircleCI a lot less desirable (if that work has to be done), since there is something that works now.
That said, as I mentioned, running CircleCI as a single job with all of what would have been jobs being handled as steps works fine, and makes CircleCI pretty convenient.
Each job runs on a different machine, and there is no way to copy the full machine state between jobs in a workflow. In addition, moving that much state via a workspace could very well be as slow as recreating it, if not slower.
I can create a feature request, but I’m not sure how it would work, so I could not begin to give a timeline or even say whether it’s possible. If I may ask, when you say pre-existing pipeline, are you coming from another service that has this ability, or is this currently local?
@drazisil, unless you have heard a similar request from others, I can’t imagine this is worth a feature request. I have a workaround that accomplishes what I need; it just limits my ability to use some nice-to-have features of CircleCI. And agreed, the way to implement this would not be to copy full machine state from job to job; it would be to tell the next job that a machine is already in the correct state for it.
And this is not migrating from a different service, it is coming from a local pipeline and migrating to CircleCI. So long as the machine type is available, migrating any existing environment seems pretty simple. Hopefully we will invest some time in containerizing the rest of the CI process, but until then CircleCI is working fine.
Great to hear! No, this is the first time I’ve heard this request and as a single customer request I agree it’s unlikely to get traction. Very glad you do have a way that works.
It may be worth noting that the Machine executors may change in price in the future, at least according to the docs. They have been offered at no additional charge for the last 18 months (or perhaps longer), but I’ve always seen them as secondary to Docker: only use them if you have to.
I hope machine executors continue to be available… as mentioned, having a machine pretty much solves most needs that are not fully containerized and allows anybody to use CircleCI. While Docker may be the ideal solution, a lot of places have pre-existing pipelines, so eliminating any barrier to using CircleCI will help onboard people.