JVM not picking up cgroups memory limit

As far as I can tell, the cgroups setting isn’t being made available to the JVM correctly. See this build. This results in the JVM being killed with exit code 137 rather than throwing an OutOfMemoryError (in some other, private, builds).

The build uses a Docker image - circleci/openjdk:11.0.2-jdk - which then invokes Maven. Maven is started with MAVEN_OPTS: -XX:+PrintFlagsFinal so the flags the JVM actually sees are logged. As shown, MaxRAM is reported as 137 GB, not 4 GB, which shows that Java can’t see the memory limit from cgroups. Yet cat on the cgroups file shows an even larger number.
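For context on that “even larger number”: under cgroup v1 the memory limit lives in /sys/fs/cgroup/memory/memory.limit_in_bytes, and when no limit is set the kernel reports a huge sentinel (roughly 2^63 rounded down to the page size), which is likely what cat was showing. A minimal Python sketch (mine, not from the build; the path and threshold are assumptions based on cgroup v1 conventions) that interprets that file the way a container-aware runtime roughly does:

```python
# Sketch: interpret the cgroup v1 memory limit file. The path and the
# "unlimited" threshold are assumptions based on cgroup v1 conventions,
# not taken from the build discussed above.
from pathlib import Path

CGROUP_V1_LIMIT = Path("/sys/fs/cgroup/memory/memory.limit_in_bytes")

# cgroup v1 reports "no limit" as a very large sentinel value
# (e.g. 9223372036854771712 on 64-bit Linux with 4 KiB pages).
NO_LIMIT_THRESHOLD = 1 << 62  # anything this large means "unlimited"


def effective_limit(raw_bytes: int):
    """Return the enforced limit in bytes, or None if effectively unlimited."""
    return None if raw_bytes >= NO_LIMIT_THRESHOLD else raw_bytes


def read_limit(path: Path = CGROUP_V1_LIMIT):
    if not path.exists():  # e.g. cgroup v2 host, or the file isn't mounted
        return None
    return effective_limit(int(path.read_text().strip()))


if __name__ == "__main__":
    limit = read_limit()
    if limit is None:
        print("no enforced limit visible")
    else:
        print(f"limit: {limit / 1024**3:.1f} GiB")
```

If the file either isn’t mounted or contains the sentinel, a JVM reading it would fall back to the host’s total RAM, which matches the 137 GB figure above.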

I don’t think I’m doing anything wrong: my reading of multiple blog posts indicates that Java should now just pick up the memory limit (and the logging shows that UseContainerSupport is true, but MaxRAM is not set ergonomically). I’m not sure where the problem is, or whether the cat value is significant or not.

As it stands, I have to set MAVEN_OPTS=-XX:MaxRAM=4g to ensure the JVM is correctly informed of the enforced memory limit.
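For anyone hitting the same thing, the workaround in .circleci/config.yml looks roughly like this (job layout and image tag are taken from the build above; everything else is illustrative, not a confirmed configuration):

```yaml
# Illustrative CircleCI 2.0 config fragment; job name and steps are examples.
version: 2
jobs:
  build:
    docker:
      - image: circleci/openjdk:11.0.2-jdk
    environment:
      # Tell the JVM explicitly about the 4 GB cgroup limit it can't see.
      MAVEN_OPTS: -XX:MaxRAM=4g
    steps:
      - checkout
      - run: mvn test
```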

Am I missing something obvious?

This is correct: you can’t use cgroups to detect memory. This blog post explains how all the different memory limit options work on CircleCI so you can determine which one is best to use: https://circleci.com/blog/how-to-handle-java-oom-errors/

And on that page it says:

Additionally, CircleCI runs on virtual machines with lots of RAM, using cgroups to allocate a slice of the pie to each individual build

There’s help on the horizon, though: Java has a new(ish) ability to read the cgroup memory limits of your build’s Docker container, rather than (mis)reading the total memory of the entire machine. These new options should make it easier to get the JVM to use “most” of the memory on the machine, without going over.

So, surely this is a bug? The page indicates you use cgroups, and the JVM now reads cgroups, so what is going wrong?

FWIW, that page is also inaccurate. Unless you set -XX:MaxRAM, Java will still get 137 errors. Maven forks the JVM when it runs tests, and controlling the memory usage of each fork isn’t easy. I’ve tested the following combinations:

  • MAVEN_OPTS=-XX:MaxRAM=3572m -Xmx1g with a surefire fork argLine of -XX:MaxRAM=3572m -Xmx8g works
  • MAVEN_OPTS=-XX:MaxRAM=3572m -Xmx1g with a surefire fork argLine of -Xmx8g gets a 137 error

Thus, I conclude that the -XX:MaxRAM=3572m isn’t optional; it’s essential (MaxRAM causes the JVM to understand that the 8g can’t actually be allocated). So your page should document that.
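Concretely, the working combination above corresponds to something like this in pom.xml (plugin version and layout are illustrative, not taken from my actual project):

```xml
<!-- Illustrative surefire configuration; the plugin version is an example. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.22.1</version>
  <configuration>
    <!-- MaxRAM must be repeated for the forked test JVM; otherwise the
         fork sizes itself from the host's RAM and gets OOM-killed (137),
         even when MAVEN_OPTS sets MaxRAM for the parent process. -->
    <argLine>-XX:MaxRAM=3572m -Xmx8g</argLine>
  </configuration>
</plugin>
```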

But what is CircleCI doing to stop cgroups from working with the JVM?

We aren’t doing anything special that I’m aware of. The post is a bit old, and I had forgotten it has been migrated to our docs at https://circleci.com/docs/2.0/java-oom/

If you still find that page incorrect, can you please open a PR or issue on it so we can investigate making it correct (either a bug fix or a docs change)?

It appears the issue might be Nomad not mounting the cgroups filesystem that is needed: enhancement: mount cgroups filesystem in exec and java driver · Issue #5376 · hashicorp/nomad · GitHub.

The resources stanza does have an effect, it’s just that the Java runtime isn’t aware of it. We don’t mount the necessary cgroups for the new Java container functionality to work. I’ll file an issue on our roadmap to look into this, and I’ll leave this ticket open for updates.

I opened Clarify that Java's UseContainerSupport doesn't currently work on CircleCI · Issue #3938 · circleci/circleci-docs · GitHub to document this limitation in CircleCI’s docs.