[Product Launch] Rerun Failed Tests (circleci tests run)

Thanks @sebastian-lerner. I had a workaround figured out but what you are proposing looks cleaner. I will test it out.

I think I noticed another possible issue: if the tests fail and I then ‘rerun failed only’, the only store_test_results that get saved are for the tests that were just re-run. On the next full run, all the other tests complain that there is ‘No timing found for test’. This produces suboptimal timing-based test parallelization on the run immediately after a ‘rerun failed only’.
I don’t know how this works under the hood, but possible solutions would be to merge the ‘rerun failed’ results with those of a previous full run, or simply to ignore the rerun timing and use the previous successful run’s timing, since that is likely more accurate.

Yup, that is one of the “known limitations”. For the users who have adopted the feature so far, the impact hasn’t been drastic, so we haven’t yet invested time in figuring out how we might fix that problem. But it’s definitely on our radar.

Hi! I noticed that when running Jest, the circleci tests run command uses file names on the first run and test names on the ‘Re-run failed tests only’ run.

First run logs:

INFO[2023-06-08T20:36:33Z] starting execution                           
DEBU[2023-06-08T20:36:33Z] received 150 test names: /home/circleci/project/app/webpack/javascripts/job_posts/form.spec.tsx /home/circleci/project/app/webpack/javascripts/data_migrations/models/multipart_upload_manager.spec.ts
...

Second run (Re-run failed tests only) logs:

DEBU[2023-06-08T20:56:25Z] 1 test(s) failed out of 3398 total tests. Rerunning 1 test file(s) 
No timing found for "Attendees interviewerAdded fetches freebusy information for normal interviews"
INFO[2023-06-08T20:56:25Z] starting execution                           
DEBU[2023-06-08T20:56:25Z] received 1 test names: Attendees interviewerAdded fetches freebusy information for normal interviews

The configuration for this job is:

npx jest --listTests | circleci tests run \
              --command="xargs npx jest -w=6 --workerIdleMemoryLimit='700MB' --reporters='default' --reporters='jest-junit' --testPathIgnorePatterns $(echo $SKIP_TEST_FILES | sed 's/,/ /g') --" \
              --verbose --split-by=timings

Any advice here? Thanks in advance!

You may be missing a file attribute in your JUnit XML output. On the re-run, circleci tests run parses the original job run’s JUnit XML and looks for that file attribute. Do you have

      JEST_JUNIT_ADD_FILE_ATTRIBUTE: true

defined somewhere? Similar to how we define it in the docs: Re-run failed tests only (preview) - CircleCI

Thanks for your reply. I do have that set as an environment variable, but I get the same result:

  - &run_jest
      run:
        name: Run jest tests (webpack)
        command: |
          npx jest --listTests | circleci tests run \
              --command="xargs npx jest -w=6 --workerIdleMemoryLimit='700MB' --" \
              --verbose --split-by=timings
        environment:
          JEST_JUNIT_OUTPUT_DIR: ./test-results/
          JEST_JUNIT_ADD_FILE_ATTRIBUTE: true

@kbruccoleri Can you send me a build link to sebastian @ circleci.com?

And if possible, can you upload the test results as an artifact in addition to the store_test_results step, so that I can take a look at the test results XML? Storing Build Artifacts - CircleCI
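
For readers setting up the same thing, here is a minimal sketch of storing one report directory both ways; ./test-results is an assumed path, so match it to your reporter’s output directory:

      # Sketch: upload the JUnit XML as test results (used by the rerun
      # feature and for timing data) and also as a browsable artifact
      # (handy when debugging issues like this one).
      - store_test_results:
          path: ./test-results
      - store_artifacts:
          path: ./test-results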

We could 100% use this feature in our organization today. I didn’t realize it was a closed preview until just now. I have the configuration set up as outlined in the instructions and would love to try it out. Can we be added to the closed beta?

@joem86 We are right on the cusp of opening this up to all users (hopefully by EOD tomorrow or early next week). Any concerns with waiting for that?

I’m also happy to add your org immediately; I just need to make and deploy a code change. If you want to try it out sooner, email your CircleCI organization name to sebastian @ circleci.com.

We’re excited to announce this functionality is now in Open Preview and available to any CircleCI user. Please reach out if you run into any questions/concerns while setting this up.


Does anyone have an example using Jest’s --selectProjects option?

Does anybody have an example of how to use this with sbt? I seem to need to split my sbt test command and my save-tests command in order to successfully store_test_results. Also, when I actually re-run just one failed test, all of my tests run. Here is how I currently have everything set up:

      - run:
          name: sbt test
          command: |
            TEST_FILES=$(circleci tests glob "**/*Spec.scala")
            echo "$TEST_FILES" | circleci tests run --command="xargs sbt -mem 12288 -Dsbt.boot.lock=false test" --verbose
      - run:
          name: save tests
          when: always
          command: |
            mkdir test-results
            find ./modules/ -type d -name "test-reports" -exec cp -R --parents "{}" ./test-results/ \;

What’s interesting is that on re-run I see the one failing test logged as “Received: FooBarSpec.scala”, so I know the re-run has access to the test that needs to be re-run, but I can’t seem to run just that one test. Any ideas?

I can’t figure out how to search this thread for my question. I would like a test to fail on the main run and succeed on the re-run so that I can try this feature out. Is there anything in the environment I can check to tell whether the code is being run by this feature?

@nroose Detect Rerun of a job is the way we’ve been able to identify that a job is a “re-run” in our internal testing. Unfortunately, there is no other environment variable. Let me know if I’m misunderstanding your question.

@orenshafir Can you possibly send a link to one of your jobs to sebastian @ circleci.com so I can take a look? And can you also make sure that you’re uploading the test results as an artifact so we can inspect those? Re-run failed tests only (preview) - CircleCI

I’ve seen some other users running Scala tests have success using the method described here: Re-run failed tests only (preview) - CircleCI, as opposed to calling sbt from the --command parameter directly.
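
For anyone hitting the same wall, here is a minimal sketch of that style of setup, under the assumption that test sources live in src/test/scala and that file names match class names; the selected.txt filename and the sed expression are illustrative, not a confirmed recipe:

      - run:
          name: sbt test (rerun-aware sketch)
          command: |
            # Let circleci tests run pick the subset (everything on a normal
            # run, only the failures on a re-run) and write it to a file
            # instead of invoking sbt directly from --command.
            circleci tests glob "**/src/test/scala/**/*.scala" | \
              circleci tests run --command='>selected.txt xargs echo' --verbose
            # Nothing selected on this parallel node: exit cleanly.
            [ -s selected.txt ] || exit 0
            # Convert file paths to fully qualified class names for testOnly,
            # e.g. src/test/scala/com/acme/FooBarSpec.scala -> com.acme.FooBarSpec
            CLASSES=$(tr ' ' '\n' < selected.txt | \
              sed -E 's|.*src/test/scala/||; s|\.scala$||; s|/|.|g' | tr '\n' ' ')
            sbt -mem 12288 -Dsbt.boot.lock=false "testOnly $CLASSES"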

@gillesbouvier-qz I haven’t seen --selectProjects with jest, but I’ve seen some users have success with this feature while using what I think is something similar: --project with playwright (Command line | Playwright).

Are you seeing an issue with --selectProjects with jest?

I figured out that I can use a project env variable that I change between the main run and the re-run.


Nice feature! I managed to set it up correctly and I can re-run only the failed tests. The problem is that I have many “workers” running tests in parallel, and when a worker doesn’t have any tests to run (on the rerun-failed-only run), it fails in the persist_to_workspace step:
“The specified paths did not match any files in /root/project”

Any idea how to solve this?

      - persist_to_workspace:
          root: .
          paths:
            - coverage/e2e
            - artifacts/playwright/report*
            - playwright-reports

Thanks!

@dmelo7

Not the most elegant solution, but I tried it out and it seems to work just fine:

Set up the directories (coverage/e2e, artifacts/playwright/report/, playwright-reports) as an initial step in your job.

Then when the job runs, a parallel node that executes tests will write content into those directories and persist them to the workspace just fine, and a parallel node that runs no tests will still succeed at the persist_to_workspace step because the directories already exist.

Something like:

steps:
  - checkout
  - run: mkdir no_files_here
  - run: # test command with circleci tests run
  - store_test_results:
      path: ./test-results
  - store_artifacts:
      path: ./test-results
  - persist_to_workspace:
      root: .
      paths:
        - no_files_here

In the example above, no_files_here is the equivalent of your coverage/e2e, etc. directories.


Hey Sebastian, thank you for the response; that was actually really helpful! The only blocker I have at the moment is that the command TEST_FILES=$(circleci tests glob "**/*Spec.scala") is not guaranteed to grab every file that should be tested, since not all files run by sbt test end in Spec in some of our services. Do you know of another way to grab the full file paths for the files that should be tested?
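
One possibility, offered as a sketch rather than a confirmed answer: select by sbt’s conventional test-source directory instead of by file-name suffix, so unconventionally named suites are picked up too:

      # Sketch: glob the whole test source tree (assumes the standard
      # src/test/scala layout) rather than relying on a *Spec.scala suffix.
      TEST_FILES=$(circleci tests glob "**/src/test/scala/**/*.scala")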

I am trying to make use of this new feature by following the steps in the Re-run failed tests only blog post, and I believe I have reconfigured my config.yml appropriately. My tests pass just fine; however, after the tests finish running I get an exit code of 127 saying “not found” for each file that contained tests. I suspected this was because my test runner (Django) expects test files to be dot-separated while circleci tests expects them slash-separated, so I tried providing slash-separated file names to circleci tests. But then my test command failed, saying the arguments it received were invalid, even though the file names passed in --command were dot-separated.
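
A minimal sketch of one way to bridge the two formats, assuming slash-separated .py test files and Django’s dotted test labels; the glob pattern, the selected.txt filename, and the sed conversion are illustrative assumptions, not a confirmed fix:

      - run:
          name: Run Django tests (path-conversion sketch)
          command: |
            # Feed slash-separated file paths to circleci tests run, so they
            # can be matched against the JUnit XML on a re-run, and capture
            # the selected subset in a file instead of running tests directly.
            circleci tests glob "**/tests/test_*.py" | \
              circleci tests run --command='>selected.txt xargs echo' --verbose
            [ -s selected.txt ] || exit 0
            # Convert file paths to dotted labels for Django's test runner,
            # e.g. app/tests/test_models.py -> app.tests.test_models
            LABELS=$(tr ' ' '\n' < selected.txt | \
              sed 's|\.py$||; s|/|.|g' | tr '\n' ' ')
            python manage.py test $LABELS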