[Product Launch] Smarter Testing is now in Beta

We’re excited to announce that Smarter Testing is now available in beta for all Cloud customers. This includes test impact analysis, dynamic test splitting, and automatic rerun of failed tests — all designed to help you run fewer tests, run them faster, and get signal sooner, while maintaining full confidence.

What beta means

Beta means the product is in an early stage, so you may encounter bugs, unexpected behaviour, or incomplete features. When the feature reaches general availability, there will be a cost associated with access and usage.

Getting started

Head over to our Smarter Testing documentation to learn how it works and how to set it up for your test suites.

What to expect

  • The documentation should have you up and running. That said, if you run into setup or configuration challenges, we recommend holding off until our new CLI-driven onboarding tool ships — it validates your configuration and walks you through any needed fixes. We expect it to land in the coming weeks.
  • If you hit bugs or notice unexpected behaviour, let us know by dropping your questions or concerns right here in this thread. Your feedback directly shapes what Smarter Testing looks like at GA.
  • Have ideas or feature requests? Submit them on our Ideas board where you can also see existing feature requests and vote on them.

Is support for Ruby/RSpec/SimpleCov there or coming soon? Thank you!

Is it true that it only compares with the default branch? Would it compare with a commit that has a tag and is on the default branch? For most of our tests, we don’t run them directly on the default branch. And testing is tough because it doesn’t seem to care about tests run previously on a particular feature branch.

You can set it up with any test runner as long as it outputs coverage data, so SimpleCov works well here. Additionally, we offer built-in coverage support for certain test runners, and RSpec is one of those supported options.
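As an illustrative sketch (this is my example, not an excerpt from the CircleCI docs), wiring SimpleCov into an RSpec suite usually comes down to starting it before any application code loads, typically at the top of `spec/spec_helper.rb`:

```ruby
# spec/spec_helper.rb -- illustrative SimpleCov setup for RSpec.
# SimpleCov must start before your application code is required,
# otherwise files loaded earlier won't appear in the coverage data.
require 'simplecov'

SimpleCov.start do
  add_filter '/spec/' # don't count the test files themselves as coverage
end

require 'rspec'
```

Exact formatter and output settings will depend on how you feed the coverage data into your pipeline, so treat the block above as a starting point rather than a required configuration.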

Test Impact Analysis compares your changes against the impact map generated during the analysis phase. For most teams, this map is built from the default branch.

When you push a new code change, we compare the changes in that branch against the existing impact map and run only the necessary tests.

So in practice:

  • It doesn’t strictly “compare branches” directly

  • It compares your changes to the baseline map (usually from the default branch)

You can read more details on how this works here:
https://circleci.com/docs/guides/test/set-up-test-impact-analysis/#how-it-works

I think there are 2 things to consider. One is the impact map for deciding whether a test needs to run, and the other is the set of changes to check against the impact map. I don’t really get where it is getting either of those in our environment. Is there a way I can have visibility into that? If I run the tests on a tag that is on a commit, and that commit is in the history of a feature branch I am working on, will it use the impact map from that run and the changes from that commit?

1. The impact map and visibility

The impact map is built during analysis runs. During analysis, each test runs with coverage instrumentation, and we record which source files each test touches along with the content hashes of those files. You can inspect the impact data by running analysis locally — see this section of the docs.

Set up test impact analysis (CircleCI Docs) — this section also shows an example of what the impact data looks like.

2. How change detection works

When using test selection, we hash your current source files and compare those hashes against the hashes stored in the impact map — it’s purely a content-level comparison against the latest impact map. Smarter Testing does not compare the contents of commits or tags; it compares the checked-out file system with the impact data from the last time analysis was run.
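To make the content-level comparison concrete, here is a small sketch of the idea. The names (`impact_map`, `tests_to_run`, `file_hash`) are mine and hypothetical, not CircleCI internals: a test is selected to run when any file it touched during analysis now hashes differently, or is missing.

```ruby
require 'digest'

# Hypothetical sketch of content-hash-based test selection.
def file_hash(content)
  Digest::SHA256.hexdigest(content)
end

# impact_map:    test name => { source path => hash recorded during analysis }
# current_files: source path => current file content in the checked-out tree
def tests_to_run(impact_map, current_files)
  impact_map.select do |_test, deps|
    deps.any? do |path, recorded|
      content = current_files[path]
      content.nil? || file_hash(content) != recorded # changed or deleted
    end
  end.keys
end

impact_map = {
  'user_spec' => { 'user.rb' => file_hash("class User; end\n") },
  'cart_spec' => { 'cart.rb' => file_hash("class Cart; end\n") },
}
# cart.rb changed since analysis; user.rb did not.
current = {
  'user.rb' => "class User; end\n",
  'cart.rb' => "class Cart; def total; end; end\n",
}
puts tests_to_run(impact_map, current).inspect  # => ["cart_spec"]
```

Note that nothing here consults git history — which matches the explanation above: the comparison is between the files on disk and the hashes captured the last time analysis ran.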

OK. That seems very good. It’s not about the git diffs. It’s about the file hashes since the impact analysis was run. I think we can manage that effectively. Thank you very much! My mind was too connected to git. You all do such great work!

Awesome! Glad we could help clarify 🙂

Been testing this, with good results so far. Really interested in some of the other things that I suspect this will unlock down the road (but also curious / nervous to see how the pricing model will be).

I know it’s possible to use mustache templating to set different parallelism on the default branch vs. other branches, but one thing I think customers with high parallelism (at the CircleCI test-splitting level) would really benefit from is a way to set parallelism more dynamically depending on the number of changes. I can see a scenario where most of our branches have relatively few tests to run, but there will also be scenarios (where many files are changed, or where we’ve changed one of the files that the suite says should trigger a full run) where the lower parallelism we set on PR branches would make the run take forever.

Thanks for the detailed feedback—this is a great point. We’re not building dynamic parallelism directly, but we do have related work in our backlog that would help enable use cases like this. Your example is really helpful context as well.