Memory use in CCI among different containers

Hello,
Recently, some of our CCI jobs started failing with errors like: “CircleCI was unable to execute the job to completion, because the CircleCI runner received a kill signal midway through the job.”

I increased the resource_class of the Docker containers for the failing jobs, and that seemed to fix things. But now other jobs are also failing intermittently, and I’m puzzled: they build or deploy things that haven’t changed in a while, and they shouldn’t be using anywhere near as much memory as the jobs that were originally failing, which work with a gigantic cached tarball of dependencies shared among several jobs.
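For reference, the change was just bumping `resource_class` on the jobs that were failing; a minimal sketch, with a placeholder job name, image, and build step:

```yaml
version: 2.1

jobs:
  build_deps:                   # placeholder job name
    docker:
      - image: cimg/base:stable # placeholder image
    # Bumped from medium (2 vCPUs / 4 GB) to large (4 vCPUs / 8 GB)
    resource_class: large
    steps:
      - checkout
      - run: make build         # placeholder build step
```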

My understanding of how CCI works is that each job runs in its own Docker container with its own memory allocation, so jobs running concurrently in different containers can’t interfere with one another and cause each other to run out of memory. The errors, though, make it look like that is exactly what’s happening. Can you confirm how this works?
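In case it’s useful for diagnosing this, here’s the kind of step I can add to a job to print the container’s memory limit and peak usage, to confirm what each container actually gets. This is a sketch using the cgroup v1 paths; on a cgroup v2 host the equivalents would be `/sys/fs/cgroup/memory.max` and `/sys/fs/cgroup/memory.peak`:

```yaml
      - run:
          name: Show memory limit and peak usage
          command: |
            # Hard memory limit for this container, in bytes
            cat /sys/fs/cgroup/memory/memory.limit_in_bytes
            # High-water mark of memory actually used so far, in bytes
            cat /sys/fs/cgroup/memory/memory.max_usage_in_bytes
          when: always   # run even if an earlier step failed
```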

Here is an example of a job that just failed. Another job in that workflow installs a bunch of dependencies and caches them with CCI as a tarball, which some other jobs (not the one that failed, though) then decompress and use. Here is the job that created the tarball.
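Simplified, the caching job follows the standard `save_cache` pattern, roughly like this sketch (the job name, image, cache key, and paths are placeholders, not our real values):

```yaml
jobs:
  install_deps:                 # placeholder: the job that builds the tarball
    docker:
      - image: cimg/node:20.11  # placeholder image
    steps:
      - checkout
      - run: npm ci             # placeholder install command
      - save_cache:             # CCI tars and compresses these paths
          key: deps-v1-{{ checksum "package-lock.json" }}
          paths:
            - ~/project/node_modules
```

The downstream jobs then pull it back in with a `restore_cache` step using the same key, which is where the tarball gets decompressed.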