ronag opened this issue 1 year ago
This was discussed at a TSC meeting, and we would love to have a PR to discuss.
In my experience, this can lead to reduced memory consumption for most server-side workloads because fewer objects will be moved to old space.
I'd like to take a look at this. Does anybody have extra knowledge on this, in case some help is needed?
Would setting the option value in node.cc like this be the way to go, or is there a better place to put this?
diff --git a/src/node.cc b/src/node.cc
index 2aba7333d9..e1e0a6b924 100644
--- a/src/node.cc
+++ b/src/node.cc
@@ -779,6 +779,8 @@ static ExitCode InitializeNodeWithArgsInternal(
// is security relevant, for Node it's less important.
V8::SetFlagsFromString("--no-freeze-flags-after-init");
+ V8::SetFlagsFromString("--max_semi_space_size=128");
+
#if defined(NODE_V8_OPTIONS)
// Should come before the call to V8::SetFlagsFromCommandLine()
// so the user can disable a flag --foo at run-time by passing
Maybe let's start with 64 MB, as it shows a smaller peak memory increase while still delivering a significant overall performance improvement?
Sure, I ran the tests again; it seems like 64 MB is actually the sweet spot.
For 128 MB:
debadreechatterjee@Debadree-TagMango-MacBook-Pro web-tooling-benchmark % /Users/debadreechatterjee/Documents/Personal/node/node-perf dist/cli.js
Running Web Tooling Benchmark v0.5.3…
-------------------------------------
acorn: 14.11 runs/s
babel: 10.97 runs/s
babel-minify: 16.76 runs/s
babylon: 20.19 runs/s
buble: 6.60 runs/s
chai: 18.82 runs/s
coffeescript: 10.61 runs/s
espree: 5.38 runs/s
esprima: 16.88 runs/s
jshint: 15.18 runs/s
lebab: 15.31 runs/s
postcss: 9.26 runs/s
prepack: 9.91 runs/s
prettier: 10.38 runs/s
source-map: 10.87 runs/s
terser: 29.18 runs/s
typescript: 15.18 runs/s
uglify-js: 7.73 runs/s
-------------------------------------
Geometric mean: 12.45 runs/s
And for 64 MB:
debadreechatterjee@Debadree-TagMango-MacBook-Pro web-tooling-benchmark % /Users/debadreechatterjee/Documents/Personal/node/node-perf --max-semi-space-size=64 dist/cli.js
Running Web Tooling Benchmark v0.5.3…
-------------------------------------
acorn: 16.04 runs/s
babel: 11.32 runs/s
babel-minify: 16.83 runs/s
babylon: 18.43 runs/s
buble: 7.23 runs/s
chai: 19.44 runs/s
coffeescript: 11.17 runs/s
espree: 5.46 runs/s
esprima: 19.62 runs/s
jshint: 17.77 runs/s
lebab: 16.95 runs/s
postcss: 8.80 runs/s
prepack: 11.63 runs/s
prettier: 10.41 runs/s
source-map: 11.36 runs/s
terser: 29.36 runs/s
typescript: 14.97 runs/s
uglify-js: 8.74 runs/s
-------------------------------------
Geometric mean: 13.12 runs/s
I just needed to know if that was the right place to put this change, since I'm new to the C++ part of the codebase 😅
I'm not sure that is safe to do:
I was trying to use --max-semi-space-size=64
for one of the services. After ~1 week it (the GC?) just blew up during high traffic:
As we can see, at some point V8 started requesting 128 MiB (rather than 64 MiB) of memory for the "new" space, while not being able to handle and use it.
Processes started to use multiple CPUs/cores (3 cores at 100% each):
The funny fact is that these services were still able to handle traffic without any issues.
I think presently the flag --max-semi-space-size=64
just directly sets the semi-space to 64 MB. From the basics I understood from reading that part of the V8 code, we instead need to change the partition between the young-generation and old-generation space (and set the semi-space as a proportion of the young space); directly setting it causes OOM errors. In the PR (https://github.com/nodejs/node/pull/47277) the CI failures are all OOM-related tests. (Although I haven't yet fully investigated all the nitty-gritty here and the PR is still pending 😅, feel free to correct me if I said something wrong.)
The node docs already explicitly mention setting max-old-space-size, and this is essential for efficient use of memory in single-process server applications. The reason this setting isn't baked in is because of the qualifications, such as single-process, and because YMMV.
IIRC, max-semi-space-size will be derived from max-old-space-size.
If anything is done wrt max-semi-space-size, I suggest first adding a recommendation to the docs to accompany (and complement) the max-old-space-size recommendation.
In the version of V8 (11.3) used in Node 20, the default max-semi-space-size is no longer a constant (16 MB), and instead scales with the memory limit.
In addition to the documentation now being incorrect:
https://github.com/nodejs/node/issues/55487
This can also have an undesirable performance impact when upgrading from Node 18 to Node 20:
In Node 20, max-semi-space-size depends on the memory limit, and its default will be less than 16 MB for memory limits up to 2 GB, inclusive.
For example, with a memory limit of 512 MB, v20 defaults to a 1 MB semi-space, whereas the v18 default on 64-bit systems is 16 MB.
Without an explicit semi-space size in node:20, the young generation in this case is only 3 MB:
docker run -it -m 512m \
-e NODE_OPTIONS="--max-old-space-size=384" \
node:20 node -e 'console.log(v8.getHeapStatistics().heap_size_limit/(1024*1024))'
387
387 - 384 = 3 MB = 3 × max-semi-space-size (1 MB)
Configuring the semi-space size, below, gives the same young-generation size (48 MB) as node:18:
docker run -it -m 512m \
-e NODE_OPTIONS="--max-old-space-size=384 --max-semi-space-size=16" \
node:20 node -e 'console.log(v8.getHeapStatistics().heap_size_limit/(1024*1024))'
432
432 - 384 = 48 MB = 3 × max-semi-space-size (16 MB)
See https://github.com/nodejs/node/issues/42511.