X86 VM CPU/RAM size mismatch

Our build system automatically detects the number of CPUs and the amount of RAM, then decides on the number of threads. There seems to be a mismatch between what our scripts detect and the VM instance size shown on the status page. We are using the docker executor for x86_64 builds,
so choosing:

  • x86 xlarge: the script detects 36 CPU cores / 68 GB RAM
  • x86 large: shows the same, 36 CPUs / 68 GB RAM
  • arm64 xlarge: shows 8 CPUs / 30 GB RAM (matches the advertised CPU/RAM)

We detect the CPU count and RAM size using simple bash commands:

NCPU=$(nproc --all)                          # number of CPU cores
RAM=$(free -g | awk '/^Mem:/ {print $2}')    # total RAM in GB
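For context, these values then drive the build's parallelism along these lines (illustrative only; the 2 GB-per-job figure and the make invocation are placeholders, not our actual build script):

# Illustrative: cap the job count by CPU count and by RAM,
# assuming roughly 2 GB per job (placeholder figure)
JOBS=$(( RAM / 2 < NCPU ? RAM / 2 : NCPU ))
make -j"${JOBS}"

So when the detected numbers are far above the real allocation, the build starts many more jobs than the instance can actually hold.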

If I use an SSH build and log in (say, on an x86 large instance) and run the top command to view resources, the numbers match what the script returns, i.e. a lot more than what they are supposed to be.
I don't know whether these are resources shared with other VMs, but it seems the actually available resources are lower, which causes some builds to run out of memory and fail.

Did anyone face a similar problem?


(Screenshot: running top while SSH'd into an xlarge x86_64 VM)

Here’s the reply I got from CircleCI support:

Thank you for contacting CircleCI Support!

Currently, when using our docker executor, calling nproc (or any other command-line CPU/RAM check) will return the shared values of the actual host running the executor. This is by design so that we can control “peaks”. Let’s say you are utilizing a medium resource class with 2 CPU cores and 4 GB of RAM; this would default to a maximum CPU usage of 200% (100% for each core). We allow a “peak” to, say, 250%. So if your CPU usage peaks at 250% but falls back below the 200% “max”, the build will not fail and will continue.

For a more local-like structure, you could utilize our machine executor, which, when running nproc, would report the resource class’s configured values.
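For anyone stuck on the docker executor, one possible workaround is to read the container’s cgroup limits instead of nproc/free. This is a sketch, not something from the support reply, and it assumes the container’s cgroup limits actually reflect the resource class (which I haven’t verified on CircleCI); the file paths also differ between cgroup v1 and v2:

# CPU limit from cgroups (falls back to nproc if no quota is set)
if [ -f /sys/fs/cgroup/cpu.max ]; then
  # cgroup v2: file contains "<quota> <period>" or "max <period>"
  read -r quota period < /sys/fs/cgroup/cpu.max
  if [ "$quota" = "max" ]; then
    NCPU=$(nproc --all)
  else
    NCPU=$(( (quota + period - 1) / period ))
  fi
elif [ -f /sys/fs/cgroup/cpu/cpu.cfs_quota_us ]; then
  # cgroup v1: quota is -1 when unlimited
  quota=$(cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us)
  period=$(cat /sys/fs/cgroup/cpu/cpu.cfs_period_us)
  if [ "$quota" -gt 0 ]; then
    NCPU=$(( (quota + period - 1) / period ))
  else
    NCPU=$(nproc --all)
  fi
else
  NCPU=$(nproc --all)
fi

# Memory limit from cgroups, converted to GB (falls back to free -g)
if [ -f /sys/fs/cgroup/memory.max ]; then
  mem_bytes=$(cat /sys/fs/cgroup/memory.max)                       # cgroup v2
elif [ -f /sys/fs/cgroup/memory/memory.limit_in_bytes ]; then
  mem_bytes=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)     # cgroup v1 (a huge number means unlimited)
else
  mem_bytes=""
fi
if [ -z "$mem_bytes" ] || [ "$mem_bytes" = "max" ]; then
  RAM=$(free -g | awk '/^Mem:/ {print $2}')
else
  RAM=$(( mem_bytes / 1024 / 1024 / 1024 ))
fi

echo "Detected ${NCPU} CPU(s), ${RAM} GB RAM for this container"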
