Use the same machine executor for multiple workflow jobs to share resources between them

Dear CircleCI support,

I am working on an end-to-end pipeline where I would like to prepare a production environment and then run my end-to-end tests against it. I found a basic solution, but it isn’t working.
The reason I would like to do this is that we have other tests and tools that we could run against the same environment.

version: 2.1

executors:
  build:
    machine:
      image: ubuntu-2004:202201-02
      resource_class: large
      docker_layer_caching: true
    working_directory: ~/yolo

jobs:
  build-steps:
    executor: build

    steps:
      - add_ssh_keys
      - checkout

      - run:
          name: Build phase
          command: |
            docker-compose --verbose build foo

      - run:
          name: Lift up foo
          command: docker-compose --verbose --profile foo up -d

      - run:
          name: Solve CircleCI issue, "permission denied, mkdir '/home/circleci/yolo/node_modules/@actions' npm ERR!"
          command: sudo chown -R `whoami` ~/yolo

      - persist_to_workspace:
          root: ./
          paths:
            - ./

  e2e:
    executor: build

    steps:
      - attach_workspace:
          at: ./

      - run:
          name: Install playwright
          command: npm install -D esbuild @playwright/test

      - run:
          name: Install playwright browser dependencies
          command: npx playwright install --with-deps "chromium"

      - run:
          name: Run Playwright tests
          command: DEBUG=pw:api npx playwright test --project="chromium"



workflows:
  version: 2
  e2e:
    jobs:
      - build-steps:
          context:
            - foobar
          filters:
            branches:
              only:
                - master
      - e2e:
          context:
            - foobar
          requires:
            - build-steps
          filters:
            branches:
              only:
                - master


Are you expecting the built and running Docker container created in the ‘build-steps’ job to be available to the ‘e2e’ job?

I thought the persist_to_workspace / attach_workspace functions could do it for me. Am I wrong?

Edit:
Can I do it somehow?

Not wrong, but maybe overly optimistic about how powerful the workspace commands are. They allow files within the file system to be moved between jobs, but they are not going to transfer a built and running Docker container.

From the example you have provided, I would say that you should read up on CircleCI reusable commands. You could take the steps from each job and turn them into commands, which you would call within a single job, using a single executor, as sketched below.
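A minimal sketch of that approach, reusing the executor and steps from your config (the command and job names here are made up for illustration):

version: 2.1

commands:
  prepare-env:
    steps:
      - run:
          name: Build phase
          command: docker-compose --verbose build foo
      - run:
          name: Lift up foo
          command: docker-compose --verbose --profile foo up -d
  run-tests:
    steps:
      - run:
          name: Run Playwright tests
          command: DEBUG=pw:api npx playwright test --project="chromium"

jobs:
  build-and-e2e:
    executor: build
    steps:
      - checkout
      - prepare-env   # reusable command defined above
      - run-tests     # runs on the same machine, so the container is still up

Because everything runs in one job on one executor, the container started by prepare-env is still running when run-tests executes, and no workspace persistence is needed.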

A more complex workflow based on conditions would need to build the container in one job and then store it somewhere like Docker Hub so that it can be retrieved for use in other jobs, but this slows the process down and can add extra running costs, as you pay egress data fees when storing the image.
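A rough sketch of that registry approach, assuming docker-compose produces an image tagged foo; REGISTRY_USER and REGISTRY_PASS are hypothetical context variables, while CIRCLE_SHA1 is a built-in CircleCI variable:

jobs:
  build-steps:
    executor: build
    steps:
      - checkout
      - run:
          name: Build and push image
          command: |
            docker-compose --verbose build foo
            # Log in and push so a later job can pull the same image.
            echo "$REGISTRY_PASS" | docker login -u "$REGISTRY_USER" --password-stdin
            docker tag foo "$REGISTRY_USER/foo:$CIRCLE_SHA1"
            docker push "$REGISTRY_USER/foo:$CIRCLE_SHA1"
  e2e:
    executor: build
    steps:
      - run:
          name: Pull and start image
          command: |
            docker pull "$REGISTRY_USER/foo:$CIRCLE_SHA1"
            docker run -d "$REGISTRY_USER/foo:$CIRCLE_SHA1"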

The final and even more complex solution, which may be the long-term answer, is the use of a self-hosted runner. These can persist between jobs within a workflow, or even across whole workflows. So it is possible to deploy a very complex environment using one workflow and then run other workflows within that environment.
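For reference, a job targets a self-hosted runner through its resource class; the namespace and class name below are placeholders for whatever you register:

jobs:
  e2e-on-runner:
    machine: true
    resource_class: your-namespace/your-runner   # placeholder self-hosted runner class
    steps:
      - checkout
      - run: DEBUG=pw:api npx playwright test --project="chromium"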

I do not want a really complex build system, because it’s hard to maintain. Maybe I will take the build logic out of Docker, so we can use the build without any “hack”. :thinking:

I think if I pulled the build phase out of Docker, I would save a lot of time. :grin:

I saw a trick to move the built Docker image to the next phase by exporting the whole container, but I think it’s not worth it at all. :cry:
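For completeness, that trick usually means saving the image to a tarball and shipping it through the workspace; a minimal sketch, assuming docker-compose produced an image tagged foo (note docker save exports the image, not a running container’s state):

In the build job, after the build step:

      - run:
          name: Export image to a tarball
          command: docker save -o foo-image.tar foo
      - persist_to_workspace:
          root: ./
          paths:
            - foo-image.tar

And in the e2e job, after attach_workspace:

      - run:
          name: Re-import the image
          command: docker load -i foo-image.tar

For a large image, moving that tarball through the workspace can easily take longer than rebuilding with the layer cache, which is probably why it did not seem worth it.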

Anyway, thank you for your answer. :blush:

Okay, maybe it’s not a good idea to pull out the build phase, because Docker layer caching saves a lot of time in the build and prepare phase.

I do agree it would be nice for the same machine executor to be carried over the course of the workflow, though.
I appreciate that if the workflow jobs fan out and parallelise, it would be much harder.
But if the only other option is to create your own self-hosted machine and attach it to the pipeline so it runs through to completion on the same machine, wouldn’t it make sense for CircleCI to offer this option too?

It would be very hard for CircleCI to provide a generalized machine instance that operated as the foundation of many different jobs; the support burden would be rather large, as the state of the system between jobs would be hard to track.

It is very easy for me to define an oversized runner as a VM with 8 cores, 32 GB RAM and 160 GB of disk, as I am using a VMware server, and VMware can reallocate most of those resources to other defined runners when a runner is running but not active. Such generalization on AWS (which CircleCI uses) would become very costly for the end user. But in this configuration, all the support issues become my problem.

From the example you have given, the best option would just be to build the container within the job that also runs the tests, rather than requiring that an independent build job runs first. The workflow then reduces to a single entry, as shown below.
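Tying it together, the workflow needs only the single combined job (the build-and-e2e job sketched earlier), with no persist/attach between jobs:

workflows:
  e2e:
    jobs:
      - build-and-e2e:
          context:
            - foobar
          filters:
            branches:
              only:
                - master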

