I have two workflows that trigger on an initial GitHub pull request. Each workflow has two sequential jobs. I would have expected one workflow to complete before the next started; instead, one job from each workflow executed first. How do I make workflows run sequentially?
To give a detailed answer it would help if you posted your config.yml file, but I can make some general comments.
The details of how workflows, and the jobs defined within a workflow, are executed can be found here:
https://circleci.com/docs/workflows
From what you have posted, I am guessing that your current configuration looks something like this (excluding any parameters you pass around):
workflows:
  version: 2
  xperiflow_testing_suite:
    jobs:
      - reset_env
      - test_suite
  xperiflow_dev_test_suite:
    jobs:
      - reset_env
      - test_suite
The issue is that each workflow will “run independently”, so the only limit on how many workflows start at once is the number of workflows you define and the number of runners you have available.
The same is also true for jobs, except that jobs can use the “requires:” statement to only allow a job to run after the completion of another job within the same workflow.
From what you have posted, it seems that what you need is a single workflow that executes the four job runs you currently have defined, with each job using the “requires:” statement so that it only runs once the previous job has completed. So, something like the following:
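As a rough sketch (using the job names from my guess above, not your actual config, and showing just one reset_env / test_suite pass):

workflows:
  xperiflow_testing_suite:
    jobs:
      - reset_env
      - test_suite:
          requires:
            - reset_env
      # a second reset_env / test_suite pass would be chained on here in the same way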
One complication is that I am not sure “requires:” can handle the fact that you are calling reset_env and test_suite twice. So you may want to change the structure so that each of your current workflows becomes a job, and your current jobs become commands executed by that job. This way reset_env and test_suite just become blocks of steps that you can reuse within whichever job you wish to run.
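In outline, that restructuring could look something like this (the step contents are placeholders, and full_test_run is just a name I have made up):

commands:
  reset_env:
    steps:
      - run: echo "environment reset steps go here"
  test_suite:
    steps:
      - run: echo "test suite steps go here"

jobs:
  full_test_run:
    machine: true   # plus your runner resource_class, working_directory, shell, etc.
    steps:
      - checkout
      - reset_env
      - test_suite
      - reset_env
      - test_suite

workflows:
  xperiflow_testing_suite:
    jobs:
      - full_test_run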
My full config.yml is below. I use GitHub Actions to trigger runs, as I found that gives me better granularity with triggers than native CircleCI. I also have a single Windows self-hosted runner with one instance of a CircleCI agent. I admit this is my test setup, so I wouldn't trigger two workflows based off one GitHub Action in production. However, given the single runner, I would have thought the default behavior would be to complete a full workflow before starting another.
Let me know.
# Use the latest 2.1 version of CircleCI pipeline process engine.
# See: https://circleci.com/docs/2.0/configuration-reference
version: 2.1

# Invoke jobs via workflows
# See: https://circleci.com/docs/2.0/configuration-reference/#workflows
parameters:
  GHA_Actor:
    type: string
    default: ""
  GHA_Action:
    type: string
    default: ""
  GHA_Event:
    type: string
    default: ""
  GHA_Meta:
    type: string
    default: ""
jobs:
  reset_env:
    machine: true
    resource_class: repo/xperiflow_build
    working_directory: C:\repo\xperiflow
    shell: powershell.exe
    steps:
      - checkout
      - run:
          command: |
            Write-Host << pipeline.parameters.GHA_Action >>
            Write-Host << pipeline.parameters.GHA_Actor >>
            Write-Host << pipeline.parameters.GHA_Event >>
            Write-Host << pipeline.parameters.GHA_Meta >>
      - run:
          name: killing python processes
          command: .\environment\circleci\kill_python_processes.ps1
      - run:
          name: setting gitignored files
          command: .\environment\circleci\set_gitignored_files.ps1
      - run:
          name: setting up framework database and deleting old databases
          command: |
            conda run -n pyxperiflow python .\environment\circleci\reset_framework.py
      - run:
          command: echo "Hi"
  test_suite:
    machine: true
    resource_class: repo/xperiflow_build
    working_directory: C:\repo\xperiflow
    shell: powershell.exe
    steps:
      - checkout
      - run:
          name: setting up conda
          command: |
            conda init
            refreshenv
      - run:
          name: running pytest
          no_output_timeout: 45m
          command: |
            .\environment\circleci\create_results_dir.ps1
            conda activate pyxperiflow
            conda env list
            pytest .\xperiflow\source\tests\conduit\servers -rA --junitxml=C:\repo\test_results\junit.xml --cov=.\xperiflow\source\tests\conduit\servers --cov-report=xml:C:\repo\cov_results
      - run:
          command: Get-Process
      - run:
          name: killing python processes
          command: .\environment\circleci\kill_python_processes.ps1
      - store_artifacts:
          path: C:\repo\cov_results
      - store_test_results:
          path: C:\repo\test_results
  dev_test_suite:
    machine: true
    resource_class: repo/xperiflow_build
    working_directory: C:\repo\xperiflow
    shell: powershell.exe
    steps:
      - checkout
      - run:
          name: setting up conda
          command: |
            conda init
            refreshenv
      - run:
          name: running pytest
          no_output_timeout: 45m
          command: |
            .\environment\circleci\create_results_dir.ps1
            conda activate pyxperiflow
            conda env list
            pytest .\xperiflow\source\tests\ -rA --junitxml=C:\repo\test_results\junit.xml --ignore=auxilary --ignore=app\routines\retry --ignore=app\routines\reversion
      - run:
          command: Get-Process
      - run:
          name: killing python processes
          command: .\environment\circleci\kill_python_processes.ps1
      - store_test_results:
          path: C:\repo\test_results
  master_test_suite:
    machine: true
    resource_class: repo/xperiflow_build
    working_directory: C:\repo\xperiflow
    shell: powershell.exe
    steps:
      - checkout
      - run:
          name: setting up conda
          command: |
            conda init
            refreshenv
      - run:
          name: running pytest
          no_output_timeout: 45m
          command: |
            .\environment\circleci\create_results_dir.ps1
            conda activate pyxperiflow
            conda env list
            pytest .\xperiflow\source\tests\ -rA --junitxml=C:\repo\test_results\junit.xml --cov=C:\repo\xperiflow\ --cov-report=xml:C:\repo\cov_results --ignore=auxilary --ignore=app\routines\retry --ignore=app\routines\reversion
      - run:
          command: Get-Process
      - run:
          name: killing python processes
          command: .\environment\circleci\kill_python_processes.ps1
      - store_artifacts:
          path: C:\repo\cov_results
      - store_test_results:
          path: C:\repo\test_results
workflows:
  xperiflow_dev_test_suite:
    when:
      and:
        - equal: [ dev, << pipeline.parameters.GHA_Meta >> ]
        - equal: [ pull_request, << pipeline.parameters.GHA_Event >> ]
    jobs:
      - reset_env
      - dev_test_suite:
          requires:
            - reset_env
  xperiflow_master_test_suite:
    when:
      and:
        - equal: [ master, << pipeline.parameters.GHA_Meta >> ]
        - equal: [ pull_request, << pipeline.parameters.GHA_Event >> ]
    jobs:
      - reset_env
      - master_test_suite:
          requires:
            - reset_env
  xperiflow_testing_suite:
    when:
      and:
        - equal: [ dev, << pipeline.parameters.GHA_Meta >> ]
        - equal: [ pull_request, << pipeline.parameters.GHA_Event >> ]
    jobs:
      - reset_env
      - test_suite:
          requires:
            - reset_env
Sorry I missed your reply.
CircleCI’s model is one of parallel execution, so as you have 3 workflows defined it will try to run all 3 in parallel if their conditions are met. This logic does not take the number of runners into account, as the runners are more of a deployment constraint that is only known at run time - as you will have seen, the runner is defined within the job definition, not the workflow definition.
I also run a single self-hosted runner environment, and so found the parallel execution of workflows and jobs more hassle than it is worth. CircleCI seems to have addressed this with the introduction of ‘commands’, so you can now have a single workflow and a single job; the job then calls a number of defined commands in sequence.
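Applied to the config you posted, that could look roughly like the following (pr_checks is just a placeholder name, and I have only carried over a subset of your steps to show the shape):

commands:
  reset_env:
    steps:
      - run:
          name: killing python processes
          command: .\environment\circleci\kill_python_processes.ps1
      - run:
          name: setting gitignored files
          command: .\environment\circleci\set_gitignored_files.ps1
      - run:
          name: setting up framework database and deleting old databases
          command: conda run -n pyxperiflow python .\environment\circleci\reset_framework.py
  dev_test_suite:
    steps:
      - run:
          name: running pytest
          no_output_timeout: 45m
          command: |
            .\environment\circleci\create_results_dir.ps1
            conda activate pyxperiflow
            pytest .\xperiflow\source\tests\ -rA --junitxml=C:\repo\test_results\junit.xml

jobs:
  pr_checks:
    machine: true
    resource_class: repo/xperiflow_build
    working_directory: C:\repo\xperiflow
    shell: powershell.exe
    steps:
      - checkout
      - reset_env
      - dev_test_suite
      - store_test_results:
          path: C:\repo\test_results

workflows:
  xperiflow_dev_test_suite:
    when:
      and:
        - equal: [ dev, << pipeline.parameters.GHA_Meta >> ]
        - equal: [ pull_request, << pipeline.parameters.GHA_Event >> ]
    jobs:
      - pr_checks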
Thank you rit1010! I will look into this! I don’t really understand why a workflow wouldn’t finish before another starts. As a feature I get that it chains jobs, but if my first job cleans the runner environment, I need the rest of that workflow to finish before another workflow starts and re-clears the environment. I might as well not use multiple workflows and just have one job do all the steps so I know they will execute sequentially.
You can use the ‘requires’ statement to sequence jobs, or have a set of jobs wait on the completion of one or more other jobs within a workflow, but in the environment you have described that seems like additional configuration work as you have a single runner. The details can be found here:
https://circleci.com/docs/workflows