Flaky Selenium tests


Hello everyone. I’d like to know how to configure machine resources on a per-build basis as mentioned above. Where can I find instructions to achieve this?


Hi @pedrovictor

Can you please explain a bit more about what you are trying to achieve with increased resources?

We have made several improvements in 2.0 and we might be able to help out with your requirements in an alternative way.

Configure Machine Resources is currently in limited testing and should be generally available fairly soon.


Sure. We are trying to stabilize our build process: our test suite includes several Selenium tests that fail intermittently, and our investigation identified slow build machine performance as the main cause.
It isn’t a matter of build time (we don’t mind builds completing in the 40-45 minute range, as they currently do), but clicking Rebuild every once in a while is something we would really like to remove from our build process.


Thanks @pedrovictor.

2.0 has been designed from the ground up and should bring performance improvements for most scenarios. I would suggest that you give it a try.

We do have features under development that should eliminate the need to click Rebuild. Stay tuned :slight_smile:


Thank you for describing your use case, Pedro. Intermittent failures in Selenium come up for many customers on our 1.0 platform as well. In my past consulting experience, I’ve also seen them frequently on other CI SaaS providers and on self-hosted CI.

Moving to faster hardware may reduce these failures, but it won’t eliminate them. The root of this flaky behavior is race conditions, and a race condition in code is still a race condition on faster hardware.
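To make that concrete, here is a minimal, hypothetical sketch in Python (not tied to Selenium itself): a background thread stands in for an asynchronous page update, a naive immediate check stands in for a racy assertion, and a polling loop stands in for an explicit wait. The names `render` and `result` are illustrative only.

```python
import threading
import time

# Hypothetical stand-in for an async page update: a background
# "renderer" thread populates `result` after a small delay.
result = {}

def render():
    time.sleep(0.05)  # simulates JavaScript finishing after page load
    result["status"] = "done"

threading.Thread(target=render).start()

# Racy check: the outcome depends on machine speed. On a fast box
# this is often None; on a slow box it might already be "done".
immediate = result.get("status")

# Robust check: poll until the condition holds (like an explicit wait),
# bounded by a deadline so a real failure still surfaces.
deadline = time.time() + 2.0
while result.get("status") != "done" and time.time() < deadline:
    time.sleep(0.01)

final = result.get("status")  # "done" regardless of machine speed
print(final)
```

The racy version passes or fails depending on timing, which is exactly why a suite can be green locally and red on a differently-loaded CI machine; the polling version encodes the condition you actually care about.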

There are a couple of reasons why you see this more on CI than locally:

  • CI runs your full test suite 100-10,000 times more often than any dev machine does; failures are most likely to surface wherever you run your tests most
  • CI runs on a specific OS and CPU architecture that may differ from dev machines

I know flaky integration tests are painful and feel like a waste of time. I’m taking the time to write this because the issue comes up with many, many customers. If you have the time, it’s worth researching and understanding the true cause. As an example, the mere presence of another gem with a C extension can cause intermittent errors with capybara-webkit.

Many such issues come up when searching for ‘intermittent’ in the capybara-webkit and Selenium issue trackers. When you run into one, your best option for getting a true fix is to isolate and reproduce the issue, then share those results with the test driver’s maintainer. We’re happy to help you look for this kind of information.
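One lightweight way to start isolating an intermittent failure is to rerun the single suspect test many times and measure its failure rate, which gives the maintainer something concrete to work with. A hypothetical Python sketch, where `flaky_check` is a stand-in; in practice you would shell out to your test runner (e.g. one `rspec` invocation per iteration):

```python
import random

# Stand-in for running one flaky test: returns True on pass.
# The 10% failure rate and the seed are arbitrary, for illustration.
random.seed(42)

def flaky_check():
    return random.random() > 0.1  # "fails" roughly 10% of the time

runs = 200
failures = sum(1 for _ in range(runs) if not flaky_check())
print(f"{failures}/{runs} runs failed")
```

A nonzero but sub-100% failure rate over many runs is strong evidence of a race rather than a plain bug, and the rate itself is useful data to include in an upstream report.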

If you don’t have the time for that, that’s totally reasonable; I also use far more open source than I have time to contribute to. I just want to repeat that for intermittent integration test failures, a bug is still a bug on faster hardware*.

* (There are exceptions to this, such as hardware-specific code for pacemakers, car firmware, etc.)