[Product Launch] Rerun Failed Tests (circleci tests run)

Hey, it seems to work nicely, I can re-run my failed Cypress tests, however I see this error in the logs right after Cypress results:

WARN[2023-05-16T12:10:36Z] There was an error when uploading test results from  with batchID 3c53b247-4b13-4b63-ba9d-2f41dca214e3: no path given 
DEBU[2023-05-16T12:10:36Z] Error encountered with test batch 3c53b247-4b13-4b63-ba9d-2f41dca214e3 
INFO[2023-05-16T12:10:36Z] ending execution

What is that about?

Also, I do get this warning from Cypress:

:warning: Warning: It looks like you’re passing --spec a space-separated list of arguments: “xxx yyy zzz”

This will work, but it’s not recommended.

If you are trying to pass multiple arguments, separate them with commas instead:

cypress run --spec arg1,arg2,arg3

The most common cause of this warning is using an unescaped glob pattern. If you are

trying to pass a glob pattern, escape it using quotes:

cypress run --spec "**/*.spec.js"

Maybe it would be nice to have an option, e.g. circleci tests run --file-delimiter=","

Those are warnings and debugging messages that we are in the process of removing to avoid confusion. Sorry about that!

Regarding the delimiter, let me pass that back to the team and see what we can do. Thanks for the feedback!


hey @villelahdenvuo I am trying to get this working for our cypress tests but hitting some issues.
We currently use the circleci tests split command, but when trying to change that to use tests run instead as per the docs, we get the error: Can't run because no spec files were found.
Do you mind sharing an example of your setup?

@drilon241 if you send over your config.yml to sebastian @ circleci.com I can take a look as well. Were you following the Cypress instructions in the docs, Re-run failed tests only (preview) - CircleCI?

If you’d be willing to share it, @villelahdenvuo, can you paste the command you’re using with the circleci tests run tool? (obscure whatever path or project name information you’d like)

I think you could get the result you’re looking for (the list of values sent to the --spec flag, comma-separated) by using the tr translate tool, like so:

$ echo "arg1 arg2 arg3" | tr ' ' ',' | xargs -I {} cypress run --spec {}

cypress run --spec arg1,arg2,arg3

To use that within circleci tests run, remove the initial echo command and make the contents of the --command flag --command="tr ' ' ',' | xargs -I {} cypress run --spec {}".

The tool currently expects its input to be space- or newline-delimited, but how that input is assembled or arranged for the specific test runner you’re using is very flexible, depending on your comfort and familiarity with built-in bash tools.
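To make the conversion concrete, here is a minimal, runnable sketch of just the delimiter step. The spec names are made up; in a real job, circleci tests run pipes this kind of space-delimited list into whatever you put in --command:

```shell
# Simulate the space-delimited spec list that `circleci tests run`
# pipes into the --command string, and convert it to the
# comma-separated form Cypress prefers (spec names are hypothetical):
echo "login.cy.js cart.cy.js checkout.cy.js" | tr ' ' ','
```

That prints login.cy.js,cart.cy.js,checkout.cy.js, which is exactly the shape Cypress’s warning asks for.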


Config sent via email. Thanks for looking into this :pray:
I was following the official docs but maybe I missed something.

I just followed the example on the circleci docs for cypress and it worked.

Thanks, I’ll try your way. It currently works with the spaces, just gives a warning.

I personally don’t love the idea of cramming a complex piped command in a string parameter, though. It doesn’t seem very readable for humans.
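For what it’s worth, one way to keep the config readable is to move the pipeline into a small helper checked into the repo and point --command at that instead of an inline string. A sketch, with a hypothetical function name and made-up spec names (the trailing-comma cleanup handles newline-delimited input too):

```shell
# join_specs: read space/newline-delimited spec paths on stdin and
# print them as the comma-separated list Cypress prefers. You might
# save this in e.g. scripts/run-specs.sh and use
# --command="./scripts/run-specs.sh" instead of an inline pipeline.
join_specs() {
  tr ' \n' ',,' | sed 's/,\{1,\}$//'   # join, then strip trailing commas
}

# Demo with made-up spec names:
echo "login.cy.js cart.cy.js" | join_specs
```

Inside such a script you would then run something like npx cypress run --spec "$(join_specs)".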

This is a great feature. Is there any way to determine within the job that it was started via “Re-run failed tests only”? e.g. an environment variable?


@bbrinck no official way at this time. We’ve had a couple of users leverage this workaround: Detect Rerun of a job
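For anyone curious, that workaround boils down to asking the v2 API about the current workflow; if memory serves, a rerun-from-failed carries a "tag" field of "rerun-workflow-from-failed" in the response. A sketch of the check, using a hard-coded sample response and a grep for brevity (the curl line in the comment shows the real fetch):

```shell
# In a real job you would fetch the workflow metadata, e.g.:
#   response=$(curl -s -H "Circle-Token: $CIRCLE_TOKEN" \
#     "https://circleci.com/api/v2/workflow/$CIRCLE_WORKFLOW_ID")
# Here a sample response stands in so the check itself is visible:
response='{"id":"abc123","tag":"rerun-workflow-from-failed","status":"running"}'

if printf '%s' "$response" | grep -q '"tag" *: *"rerun-workflow-from-failed"'; then
  echo "rerun-from-failed"
else
  echo "initial run"
fi
```

A proper JSON parser such as jq would be more robust than grep if it’s available in the image.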

This is a huge feature for us! I implemented it according to Re-run failed tests only (preview) - CircleCI but it didn’t work. That page doesn’t mention closed beta. Is there a way to get into the beta?

There’s a small issue with the docs for Cypress

circleci tests glob "cypress/**/*.cy.js" | circleci tests run --command="xargs npx cypress run --reporter cypress-circleci-reporter --spec" --verbose --split-by=timings"

That closing " shouldn’t be there

Also, should the tests be comma separated as before for test splitting?

Good spot on the docs typo. Opened a PR to make that change, thank you.

The test files should be delimited by a space, that’s the default output of the circleci tests glob command. Is that what you’re asking?

If you email me your org name, I can add you to the closed beta in the next couple of business days. sebastian@circleci.com

Thanks @sebastian-lerner, emailed you.
If we don’t do comma separation for the tests we get the following warning

This will work, but it's not recommended.

If you are trying to pass multiple arguments, separate them with commas instead:
  cypress run --spec arg1,arg2,arg3

Should we just ignore this warning?

Email received, thank you.

We’ve been okay just ignoring the warnings from Cypress during our internal testing. A user above also mentioned that it works fine for them when they ignore the warnings.

I’ll look more deeply with my team to see if there’s anything we can do in the long run to avoid that warning, but for now I think it’ll be okay.

See this reply, with some bash magic it can be converted to comma separated list: [Product Launch] Re-run Failed Tests Only (circleci tests run) - #18 by chadchabot

Have you got any ideas brewing about how you’d use that information, @bbrinck?
If you can share those use cases, that could help inform how we’re building out the tool.

Our intention is that the workflow/job config should be pretty agnostic about whether an initial run or rerun with failed is happening.
That is, a rerun would have the same test runner and options used as the initial run, only with a subset of tests.

@villelahdenvuo that’s what we currently do but because new examples didn’t have it I was wondering if having commas is actually wrong as far as Circle is concerned.

Found two possible issues:

1. If you have parallelization enabled, every server will spin up even though you might have a single broken test.

2. If you trigger ‘rerun from failed’ on one test job, it will trigger other test jobs with failed tests even if those jobs aren’t set up for it according to the docs.

@vlad for your first one, this is indeed expected behavior at this time, as noted in the FAQs. We’re looking at ways to avoid spinning up additional VMs/containers; however, for the vast majority of users we still see a substantial reduction in runtime and credits compared to rerunning all tests.

For the second item, can you send over an example workflow where you’re seeing this behavior? Are the other test jobs downstream in the workflow or run concurrently?