My tests consist of a series of independent test commands:
test:
  override:
    - eslint --ext=.js --ext=.jsx frontend
    - karma start frontend/config/karma.circleci.js --single-run
    - flake8
    - isort --check-only --diff --recursive
    - pytest -m 'not e2e' -p no:sugar --cov=backend --junit-xml=$CIRCLE_TEST_REPORTS/backend/junit.xml
    - pytest -m 'e2e and not no_ci' -p no:sugar --junit-xml=$CIRCLE_TEST_REPORTS/end-to-end/junit.xml --splinter-webdriver=chrome
The easiest way to parallelize them would be to have each container run a subset of these commands, until all of them have been run.
For example container 0 would run:
- eslint --ext=.js --ext=.jsx frontend
- flake8
- pytest -m 'not e2e' -p no:sugar --cov=backend --junit-xml=$CIRCLE_TEST_REPORTS/backend/junit.xml
And container 1 would run:
- karma start frontend/config/karma.circleci.js --single-run
- isort --check-only --diff --recursive
- pytest -m 'e2e and not no_ci' -p no:sugar --junit-xml=$CIRCLE_TEST_REPORTS/end-to-end/junit.xml --splinter-webdriver=chrome
Is this possible? I couldn’t find a reference to that behavior in the docs. I suppose it may be difficult because containers run independently and there’s no mechanism for distributing tests?
I can work around this with a bash script as described in the docs. That feels a bit messy, though.
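For reference, here is a minimal sketch of that kind of workaround, assuming the `$CIRCLE_NODE_INDEX` and `$CIRCLE_NODE_TOTAL` environment variables that CircleCI sets on each container. It round-robins the commands across containers; the commands are only echoed here so the distribution logic is visible, but in a real build each selected command would be executed (and the script would fail the build on the first non-zero exit).

```shell
#!/bin/sh
# Round-robin the test commands across parallel containers.
# CIRCLE_NODE_INDEX / CIRCLE_NODE_TOTAL are provided by CircleCI;
# the defaults below are only for running the script locally.
: "${CIRCLE_NODE_TOTAL:=2}"
: "${CIRCLE_NODE_INDEX:=0}"

i=0
while IFS= read -r cmd; do
  # This container runs every CIRCLE_NODE_TOTAL-th command,
  # offset by its own index.
  if [ $((i % CIRCLE_NODE_TOTAL)) -eq "$CIRCLE_NODE_INDEX" ]; then
    # Replace `echo` with `eval "$cmd" || exit 1` to actually run it.
    echo "node $CIRCLE_NODE_INDEX: $cmd"
  fi
  i=$((i + 1))
done <<'EOF'
eslint --ext=.js --ext=.jsx frontend
karma start frontend/config/karma.circleci.js --single-run
flake8
isort --check-only --diff --recursive
pytest -m 'not e2e' -p no:sugar --cov=backend --junit-xml=$CIRCLE_TEST_REPORTS/backend/junit.xml
pytest -m 'e2e and not no_ci' -p no:sugar --junit-xml=$CIRCLE_TEST_REPORTS/end-to-end/junit.xml --splinter-webdriver=chrome
EOF
```

With two containers, node 0 gets commands 0, 2, 4 (eslint, flake8, the first pytest run) and node 1 gets commands 1, 3, 5, which matches the split above. It still feels like something the platform should handle itself.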
Note that in my case, pytest's start-up time is large (mostly due to Django's migrations), so running it several times on different subsets of files won't fly.