Problem running selenium tests on selenium/standalone-chrome docker container

Hi,

I’m trying to run a job with two docker images. The first image runs my test code, and the second runs the standalone Selenium server with Chrome and chromedriver installed, from the SeleniumHQ docker-selenium project (https://github.com/SeleniumHQ/docker-selenium).

When I run this locally with the CircleCI local CLI tool (circleci local execute --job my_job_name), things work great: my tests run and pass. But when I try to run it on CircleCI itself, I run into a problem.

My job looks something like this:

my_job_name:
    docker:
      - image: my_ubuntu_based_docker_image
      - image: selenium/standalone-chrome:3.141.59-palladium
        name: selenium
    steps:
      - run: <run python selenium tests that talk to port 4444 running on the standalone selenium server image>
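
The real test code is more involved, but the relevant part is that it talks to the Selenium server through the hostname set by the name: key above. Purely as an illustrative sketch (not my actual tests), a remote WebDriver connection with the Python selenium 3.x bindings looks roughly like this; example.com and the title check are just placeholders:

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# "selenium" is the hostname given to the service container via name: above;
# 4444 is the default port the standalone server listens on.
driver = webdriver.Remote(
    command_executor="http://selenium:4444/wd/hub",
    desired_capabilities=DesiredCapabilities.CHROME,
)
try:
    driver.get("https://example.com")   # placeholder URL
    assert driver.title                 # trivial check to show the round trip
finally:
    driver.quit()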

What happens is that the standalone Selenium server starts up, and I can see from its console output that it receives the connection from the python selenium tests, but after a while the console output just stops, with the last line saying “Job was canceled”.

The first docker image, the one that runs the tests, shows this in its console error output:

raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))

This would appear to indicate that the standalone Selenium server container crashes or loses its network connection.

Has anyone seen this before, or does anyone have experience getting the selenium/standalone-chrome docker image to work in CircleCI?

I got this info from CircleCI Support:

It appears you have run out of memory! Our default medium container provides you with 2 CPUs and 4096 MB of RAM.

The second container running on this build is known as a Service Container. Service containers do not get their own resources; they share the 2 CPUs and 4096 MB of RAM with the parent container.

Our suggestion would be to upgrade to the performance pricing plan to gain access to the resource_class feature, where you will be able to define a larger machine to run this build on. This build was canceled at 4.9 GB of RAM usage; I believe the medium+ resource class will give the resources needed to successfully run this build.

Posting this here in case anyone else runs into the same problem.
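
For reference, applying that suggestion is just a matter of adding a resource_class key to the job once your plan includes the feature. A sketch of what that might look like with the medium+ class support mentioned:

my_job_name:
    resource_class: medium+
    docker:
      - image: my_ubuntu_based_docker_image
      - image: selenium/standalone-chrome:3.141.59-palladium
        name: selenium
    steps:
      - run: <same python selenium test step as above>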

On an unrelated note, I do like your avatar. Judging by the rollneck jumper, I think that might be Steve Dogs :rofl:

Incidentally, if you want a RAM upgrade in the short term, using a Machine executor is free, and that gives you 8 GB. However, you lose the flexibility of the Docker executor, so a paid RAM upgrade there can be a better solution.
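
If you do go the machine executor route, the job might look roughly like this; the ubuntu-2004:current image tag and the --shm-size flag are my assumptions (the docker-selenium docs recommend a larger /dev/shm for Chrome), and the tests would need to target localhost rather than the selenium hostname:

my_job_name:
    machine:
      image: ubuntu-2004:current
    steps:
      - run:
          name: Start standalone Chrome
          command: docker run -d -p 4444:4444 --shm-size=2g selenium/standalone-chrome:3.141.59-palladium
      - run: <run python selenium tests, pointed at http://localhost:4444/wd/hub>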