I am using the “output-test-files-only” workflow from the docs, but instead of the equivalent of circleci tests glob "src/**/*.js" (which for us would be circleci tests glob "*Spec.scala"), one of the engineers at my company came up with a script that grabs all the files that will be tested when sbt test runs, since it’s not guaranteed that testable files end in Spec.
The script itself is a little too specific to our Scala services, so I won’t share it here, but once you can pipe all of those tests into the circleci tests run --command=">files.txt xargs echo" part, the next step uses the suggestion below (sebastian referenced it earlier but I’m having trouble linking to it):
You could probably use these environment variables:
CIRCLE_WORKFLOW_ID
CIRCLE_WORKFLOW_WORKSPACE_ID
If you check the “Preparing environment variables” step, you can see that these variables have the same value the first time the job runs. When you rerun, the workspace ID stays the same, but the workflow ID changes. An example from my test build:
First run
CIRCLE_WORKFLOW_ID=ab3435ca-de56-4081-9a52-448215b46c9a
CIRCLE_WORKFLOW_WORKSPACE_ID=ab3435ca-de56-4081-9a52-448215b46c9a
Second run
CIRCLE_WORKFLOW_ID=62ca5e84-dbae-4341-bb99-6006962bf956
CIRCLE_WORKFLOW_WORKSPACE_ID=ab3435ca-de56-4081-9a52-448215b46c9a
to conditionally run a full sbt test the first time the workflow runs, and then run sbt testOnly {{failed test names here}} on a re-run of failed tests.
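In case it helps anyone doing the same thing, the conditional boils down to something like the sketch below. Only the environment-variable comparison comes from the suggestion above; the sbt commands and the FAILED_TESTS variable are placeholders for however you collect the previously failed test names.

# On the first run the workflow ID and the workspace ID are identical, so run the full suite.
if [ "$CIRCLE_WORKFLOW_ID" = "$CIRCLE_WORKFLOW_WORKSPACE_ID" ]; then
  sbt test
else
  # On a re-run the IDs differ. FAILED_TESTS is a placeholder for the list of
  # previously failed test names (e.g. pulled from stored test results).
  sbt "testOnly $FAILED_TESTS"
fi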
Thanks @sebastian-lerner. I’ve been trying to visualise that solution but I think my current setup might not work with it.
Here’s a snippet of my config.yml where the tests are run
- run:
    name: build and start containers then run automated tests
    command: |
      set -a
      TEST_ENV=${CIRCLE_BRANCH};
      docker-compose -f docker-compose-acceptance-test.yml up --exit-code-from acceptance --build
The docker-compose-acceptance-test.yml file then calls the shell script. Here’s a snippet of docker-compose-acceptance-test.yml:
command: /start.sh $TEST_ENV
The start.sh contains the actual pytest command that runs the tests. Here’s a snippet of the start.sh
Yes @sebastian-lerner. After following the documentation, my shell script looks like this:
# Download the circleci CLI
curl -fLSs https://raw.githubusercontent.com/CircleCI-Public/circleci-cli/master/install.sh | DESTDIR=/usr/local/bin bash
# Run the tests
circleci tests run --command="xargs python3 -m pytest acceptance_tests --test_env $TEST_ENV -n 6"
However, when the build starts, the circleci logs show the following error:
Error: Please ensure that circleci-agent is installed, expected this to be called inside a job: exec: "circleci-agent": executable file not found in $PATH
@Larry If you get rid of the curl command to download the circleci-cli, does the same error occur?
One more note on your invocation of circleci tests run: you will need to pass in a list of test filenames via stdin, similar to the docs where $TEST_FILES is piped into the circleci tests run command. Can you also please add a --verbose flag after your --command?
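For example, something along these lines. This is just a sketch: the glob pattern is a guess at where your acceptance tests live, and the pytest options are copied from your snippet above; since xargs appends the filenames, you would drop the acceptance_tests positional argument.

circleci tests glob "acceptance_tests/**/test_*.py" | circleci tests run \
  --command="xargs python3 -m pytest --test_env $TEST_ENV -n 6" \
  --verbose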
Could you send me an email to sebastian @ circleci.com with a job URL that does not do the CLI download via curl so I can take a closer look?
@Larry an email with a job URL would be great, please. I need to chat with my team about this; that is strange. The CLI should be there regardless of whether it’s invoked within a shell script or not.
Just wondering if you have any updates/recommendations around code coverage uploading.
We have an issue where we run coverage over multiple parallel runs. The “upload coverage” job has a “requires:” on the test job (this is needed because the upload-coverage job needs the results from all the different test instances), which means that on a full run no code coverage gets uploaded when the tests fail.
If we do the partial re-run, the code coverage would still be missing all of the other tests from the parallel instance that failed.
One idea we had was to make the “upload-coverage” job always run even if the tests fail (this should still give us an accurate(ish) coverage report), but currently we can’t find a way to run a job after another job regardless of its outcome.
@bolinkd I’m working with my team on putting together an example for how to handle a code coverage use case. We hope to have something in the next week or so that I can share. Thank you for the patience, totally fair feedback!
We have a similar use case, but instead of uploading code coverage, we’re uploading test results to S3 for long-term archival.
Each type of test (rspec, cypress, jest) runs as a separate CI job, which persists its JUnit XML output both as test results and to the workspace… Once all tests succeed, we have a “reporting” job which extracts the test results from all previous CI jobs, combines them, and uploads them to S3.
As mentioned above, we similarly have issues with incomplete partial results when some tests fail and get re-run.
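For context, the shape of the setup is roughly the sketch below. The job names, images, paths, and bucket are simplified placeholders rather than our real config, and the cypress and jest jobs follow the same persist pattern as the rspec job shown here.

jobs:
  rspec:
    docker:
      - image: cimg/ruby:3.2
    steps:
      - checkout
      - run: bundle exec rspec --format RspecJunitFormatter --out test-results/rspec/results.xml
      - store_test_results:
          path: test-results
      - persist_to_workspace:
          root: .
          paths:
            - test-results
  reporting:
    docker:
      - image: cimg/base:stable  # assumes the AWS CLI is available in the image
    steps:
      - attach_workspace:
          at: .
      - run: aws s3 cp test-results "s3://example-bucket/${CIRCLE_WORKFLOW_ID}/" --recursive
workflows:
  test-and-report:
    jobs:
      - rspec
      - reporting:
          requires:
            - rspec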
Unfortunately we’re subject to the “known limitation” of spinning up all N parallel workers to run tests. Our specific issue is actually that a setup step is flaky, not necessarily the tests themselves. So this feature doesn’t really move the needle for us.
It would be great if the re-running job could be limited to the parallel runs that failed previously. In theory there should be enough parallel runs to accommodate the failed tests. For example, we have parallelism set to 50 parallel runs, but only 2 failed. We’d like to re-run the failed tests using only 2 parallel runs.
@joem86 We recently added an example to the docs showing how to stop execution if there are no tests to be run on that worker. Could doing that as the very first step help you out? And then do the setup → run the tests later on.
I recognize it doesn’t prevent the spin up of the extra workers, but it should solve that problem of a flaky set-up step in one of those workers if you make that check the very first thing you do.
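The check itself is roughly the following; this is a sketch rather than the exact docs snippet, and files.txt is a placeholder for wherever your config writes the list of tests assigned to this parallel worker (e.g. via the circleci tests run --command=">files.txt xargs echo" pattern mentioned earlier in the thread).

- run:
    name: halt early if this worker has no tests to run
    command: |
      # If no tests were assigned to this worker, stop the job here, before
      # any expensive setup runs. files.txt is a placeholder path.
      if [ ! -s files.txt ]; then
        circleci-agent step halt
      fi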
@sebastian-lerner In case of a rerun, would $CIRCLE_BUILD_NUM refer to the current rerun or the previously failed run? Would we be able to pull the artifacts from both jobs and combine them (filtering for only successful reports)?
The attached image illustrates how we previously structured our workflows. Since the introduction of “Re-run failed tests”, we have simplified it to have only 3 Cypress runs (chrome, electron, ldap) and 1 rspec run.
You can see that we have a “reports” job which runs after all of the previous jobs complete successfully. This job is responsible for collecting the test results from all previous jobs and building up a coverage and traceability report, which then gets uploaded to S3.
For the reports to be complete, it is crucial that we have test results from all of the jobs, including those that failed because of flaky specs and the reruns of those jobs.
This is a great feature that I’ve had no trouble setting up for Jest, but I was wondering if anyone has a working example of using it in conjunction with Cypress, and specifically with the --record CLI flag. So far I’ve been struggling to link up the builds across machines because the combination of --parallel, --group, and --spec doesn’t play well together.
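Roughly what I’ve been attempting looks like the sketch below (the glob, group name, and file names are illustrative, and this is the combination that Cypress Cloud then rejects with the error further down):

# write the specs assigned to this parallel worker to a file
circleci tests glob "cypress/e2e/**/*.spec.ts" | \
  circleci tests run --command=">specs.txt xargs echo" --verbose --split-by=timings
# join them into the comma-separated form cypress expects and record to Cypress Cloud
npx cypress run --record --parallel --group e2e \
  --spec "$(tr ' ' ',' < specs.txt)"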
@benedfit To confirm, the docs for Cypress aren’t sufficient for you, right? Re-run failed tests - CircleCI. I’m assuming not, because those don’t have the --group or --parallel flags.
That is correct: I can get things working if I don’t use the --parallel or --group flags, but then I end up with multiple runs in the Cypress Cloud dashboard that aren’t linked.
The error returned by Cypress Cloud is as follows:
In order to run in parallel mode each machine must send identical environment parameters such as:
- specs
- osName
- osVersion
- browserName
- browserVersion (major)
This machine sent the following parameters:
{
  "osName": "linux",
  "osVersion": "Debian - ",
  "browserName": "Chrome",
  "browserVersion": "107.0.5304.121",
  "differentSpecs": {
    "added": [
      "cypress/e2e/claim.spec.ts",
      "cypress/e2e/organization/billing.spec.ts",
      "cypress/e2e/site/deploys.spec.ts",
      "cypress/e2e/site/overview/enterprise.spec.ts",
      "cypress/e2e/team/builds.spec.ts",
      "cypress/e2e/workflow-ui/log-drain-settings.spec.ts"
    ],
    "missing": [
      "cypress/e2e/navigation/logged-in/collaborator-specific-sites.spec.ts",
      "cypress/e2e/organization/overview.spec.ts",
      "cypress/e2e/site/functions.spec.ts",
      "cypress/e2e/site/overview/no-deploys.spec.ts",
      "cypress/e2e/team/overview.spec.ts"
    ]
  }
}
@benedfit Thanks, sorry, one more question: is this a net-new job that is using those Cypress flags and circleci tests run? Or do you have an example of a job that is already using those flags but not using circleci tests run?
If you have an existing example, any chance you could send it so I could take a look at sebastian @ circleci.com?
@ignatiusreza For a rerun, I would expect $CIRCLE_BUILD_NUM to be unique to the rerun.
If you look at the last bullet in this docs section, we’ve added a snippet showing how you can set up your config.yml to make sure that on a re-run you are also passing along anything you want from the original job run, in addition to anything generated by the re-run. So in your case, test results; in other cases, code coverage reports.
Based on the image you sent, I think this kind of pattern matches what you’re looking for?
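Not the docs snippet itself, but one way to sketch the idea for your S3 use case: key the uploads on CIRCLE_WORKFLOW_WORKSPACE_ID (which, as noted earlier in the thread, stays constant across re-runs) plus CIRCLE_BUILD_NUM (unique per attempt), so the original run and the re-run land under the same prefix without overwriting each other. The bucket name and paths are placeholders.

- run:
    name: upload junit results for this attempt
    command: |
      # Same prefix across re-runs, separate folder per attempt; the reporting
      # job can then merge everything under the workspace-ID prefix.
      aws s3 cp test-results \
        "s3://example-bucket/results/${CIRCLE_WORKFLOW_WORKSPACE_ID}/${CIRCLE_BUILD_NUM}/" \
        --recursive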