Our Angular 7 builds randomly fail with “Error: spawn ENOMEM”.
It happened a while back (on every build), and our solution at the time was to increase memory with node --max_old_space_size=4096, which worked great until now.
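For context, this is roughly how we applied that flag; the script name and the path to the CLI binary below are just an illustration of the pattern, adjust for your own project:

```json
{
  "scripts": {
    "build": "node --max_old_space_size=4096 ./node_modules/@angular/cli/bin/ng build"
  }
}
```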
Recently, it has started failing again at random with the same error, and a rebuild (without any code changes) fixes the problem.
I’ve tried to follow the recommendations listed in Caching Dependencies, but that doesn’t solve the issue.
@davidsonfellipe thank you for the suggestion. Unfortunately, we are using the Angular CLI.
Your suggestion did give me the insight to look into our build settings, though. Since we only ever experience this memory issue on dev (where sourceMap is enabled), I’ve decided to turn off “buildOptimizer”.
So far, this seems to be working - will update if it begins to fail again though.
There is a Jest flag that controls the number of worker processes spawned during testing. I changed my test command in config.yml from yarn test to yarn test --maxWorkers=2 and the memory bottleneck disappeared.
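Roughly what that looks like in config.yml (the job name, image, and install step are placeholders, not my exact config):

```yaml
jobs:
  test:
    docker:
      - image: circleci/node:10
    steps:
      - checkout
      - run: yarn install --frozen-lockfile
      # limit Jest to two worker processes so it stays within the container's memory
      - run: yarn test --maxWorkers=2
```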
Hey @jenCape @davidsonfellipe, I have been having the same issue since we switched to Angular CLI 8 with optimization: true and buildOptimizer: true in angular.json. We are using the CircleCI node:12.4 image. Any suggestions on how to get this fixed without disabling the optimisations? Thanks
I turned off buildOptimizer for the dev environment, which resolved the issue.
As for staging and prod, I disabled sourceMap for SCSS, which mitigates the problem slightly but doesn’t really solve it.
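For reference, the relevant bits of our angular.json look roughly like this (heavily trimmed, project name is a placeholder; on reasonably recent CLI versions the sourceMap option can be an object so you can keep script maps while dropping style maps):

```json
{
  "projects": {
    "my-app": {
      "architect": {
        "build": {
          "options": {
            "buildOptimizer": false,
            "sourceMap": true
          },
          "configurations": {
            "production": {
              "buildOptimizer": true,
              "sourceMap": {
                "scripts": true,
                "styles": false
              }
            }
          }
        }
      }
    }
  }
}
```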
The problem with my build is that the source map files are gigantic, and I’m planning to use https://www.npmjs.com/package/webpack-bundle-analyzer to optimize the imports.
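The plan (not tried yet) is roughly this; the output path is a placeholder for whatever your dist folder is:

```
# emit webpack stats alongside the production build
ng build --prod --stats-json

# inspect which imports dominate the bundles
npx webpack-bundle-analyzer dist/my-app/stats.json
```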
Thanks for your reply @jenCape. While we do not need source maps in production builds, we do need them in other environments, and the build runs fine on my local machine and for the other devs; it’s only CircleCI that runs into issues. So rather than tweaking our builds, shouldn’t there be some solution at the CircleCI level? A different Docker image?
I read somewhere in the CircleCI documentation that it does not allow allocating extra memory, which is something we can do locally. Not sure if that changes things (my build, for instance, with source maps enabled, is quite large).
It could potentially be something on CircleCI’s end, but I haven’t been able to get anyone from CircleCI to help with this, so I’d rather find alternative solutions to unblock my builds.
Hi @jenCape. Upgrading the resource_class to large for the particular job in config.yml solved the issue for me. I also switched to a Node 10 CircleCI image, since I had seen a bug filed against Node 12 about roughly 30% higher memory consumption. Hope this helps. Thanks
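For reference, the change in config.yml was roughly the following (job name, image tag, and steps are placeholders):

```yaml
jobs:
  build:
    docker:
      - image: circleci/node:10
    # large gives the job more CPU and RAM than the default medium class
    resource_class: large
    steps:
      - checkout
      - run: yarn install --frozen-lockfile
      - run: yarn build
```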
Hi @jeetparikh, your solution makes sense to me, but extending the resource_class takes extra effort (opening a support ticket and so on). Is that the only solution for this? My build fails 100% of the time…
Hi @webcat12345, unfortunately that is the only solution I got working. When you set optimization and buildOptimizer to true in angular.json, the build consumes a lot of memory (you can check this on your local machine while running a build), so increasing the resource_class to get more memory was the only fix I could make work. If you turn those two features off, the build runs with low memory, but I don’t think that is the preferred solution here. Hope this helps. Thanks.
Hi @jeetparikh, understood. Our dev team is trying to get approval to allocate larger CircleCI images, so being able to report this correctly and give managers a detailed report really helps. What is strange is that we only started facing this issue recently; it never happened before, so it hit us completely unexpectedly, and we have a presentation coming up very soon. Thanks for your help.
We started hitting these issues in the past three weeks. It appears to be a significant decrease in allocated CPU efficiency. Why wasn’t this change announced properly?