Open Kosta-Github opened 1 month ago
It's the `JSON.parse` source text access polyfill.
Sure, we can't implement it in JS as optimally as JS engines can do it natively. If you have proposals for how to optimize this polyfill - feel free to open a PR.

If performance is critical for you, you could update your Node: the feature is available natively (and the polyfill is not installed) from Node 21. Or just exclude this module from your app if you don't use `JSON.parse` source text access.
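For context, here is a brief sketch of the feature this polyfill provides, based on the TC39 "JSON.parse source text access" proposal (native in Node >= 21). The reviver receives a third `context` argument whose `source` property is the raw source text of each primitive value, which allows, for example, lossless handling of big integers:

```javascript
// Sketch of the polyfilled feature: with a three-argument reviver,
// `context.source` holds the raw source text of the current primitive.
const result = JSON.parse('{"id": 9007199254740993}', (key, value, context) => {
  // Guard: `context.source` is only present where the proposal is
  // implemented (natively or via the core-js polyfill).
  if (key === "id" && context && typeof context.source === "string") {
    return BigInt(context.source); // lossless: avoids Number rounding
  }
  return value;
});
console.log(result.id);
```

Where the feature is available, `result.id` is the exact `BigInt` `9007199254740993n`; without it, the value falls back to a (rounded) `Number`.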
It is not clear to me why the generated object hierarchies should consume more memory when using the trivial `reviver` function than without. There needs to be some additional state stored somewhere on the objects that is contributing to the increased memory usage.
Because in your case, without a `reviver`, the native `JSON.parse` is used, not the polyfill.
Sure, that is obvious. The question is: why would the object tree generated by the polyfilled `JSON.parse()` allocate more/additional memory when used with the `reviver` function? I am not concerned about potential additional memory usage during the `parse` operation, but about the additional memory usage that is kept alive and associated with the returned object hierarchy after the `parse` operation.
Say you are parsing this JSON `{ "hello": "world" }` with and without the trivial `reviver` function. Why should the result consume more memory when the `reviver` function was used?
They have the same tree. Why do you think they don't? One more time: when you call `JSON.parse` with a `reviver`, the polyfilled method is used; without one, the native method.
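To make that dispatch explicit, here is a simplified sketch (not the actual core-js source) of why only the `reviver` call path pays the polyfill cost; `parseWithSourceAccess` is a hypothetical stand-in for the polyfill's JS tokenizer:

```javascript
// Simplified sketch (NOT the real core-js code): the patched JSON.parse
// can defer to the native implementation when no reviver is supplied.
const nativeParse = JSON.parse;

// Hypothetical stand-in for the polyfill's JS tokenizer; the real one
// walks the source text so it can hand each primitive's raw text to
// the reviver via `context.source`.
function parseWithSourceAccess(text, reviver) {
  return nativeParse(text, reviver);
}

function patchedParse(text, reviver) {
  return typeof reviver === "function"
    ? parseWithSourceAccess(text, reviver) // slow JS path, extra state
    : nativeParse(text); // fast native path, native object layout
}

console.log(patchedParse('{"hello":"world"}').hello); // "world"
```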
> They have the same tree. Why do you think that's not?
Because the memory consumption is higher if that tree was generated with the polyfilled `parse()` function.
> when you call JSON.parse with reviver, used polyfilled method, without - native.
Again, I get that. This does not explain why the generated tree consumes more memory. I am not talking about the memory consumption during parsing.
Something like: `mem_used_by_object(polyfilled.parse(json)) >= 4 * mem_used_by_object(native.parse(json))`
Because the native `JSON.parse` is more optimized (including memory-wise) than the polyfill? -) They have different representations of this tree in memory, and most likely different garbage collection behavior, etc.
If you want, you could dig into it and try to optimize it. For example, `Context#source` is the same string on all instances and theoretically should be optimized by modern engines to refer to one place in memory - but something could be wrong there. Or regex usage, which also is not free. Etc. However, some specific features, like descriptor edge cases, are almost impossible to optimize because of the nature of JS.
The V8 `JSON` parser is a low-level C++ tool; it's strange to ask why a JS implementation of the same thing takes more memory. If you are talking about the result objects, not about the JSON tree, I see only 2 answers: how GC works, and descriptor usage -> the representation of the result objects in memory - but that's required for a proper result. In both cases, I don't see how it can be optimized on the `core-js` side.
I played with your example with the `--expose-gc` flag and manual GC handling. Even in this case, the polyfilled method's result object takes more memory than the native one. One possibility is that the result array is non-optimized.

In Node versions where this feature is available natively, there is also a difference in memory usage between the cases with a `reviver` and without - however, not so significant.

As I wrote, it's not a bug - it's an issue of optimization for specific engines. If it's interesting for you, feel free to play with the internal representations of objects in V8 and open a PR with an optimization for this case.
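A rough sketch of how such a retained-memory comparison can be done in Node (run with `node --expose-gc`; without the flag the GC calls become no-ops and the numbers are much noisier). This is an illustrative measurement approach, not the exact script discussed above:

```javascript
// Indicative retained-memory comparison, not a precise benchmark.
const gc = globalThis.gc ?? (() => {}); // no-op without --expose-gc

function retainedBytesPerResult(parse, json, n = 10000) {
  gc();
  const before = process.memoryUsage().heapUsed;
  const keep = []; // keep results alive so retained memory is measured
  for (let i = 0; i < n; i++) keep.push(parse(json));
  gc();
  const after = process.memoryUsage().heapUsed;
  return (after - before) / n; // approximate bytes retained per result
}

const json = '{ "hello": "world" }';
const native = retainedBytesPerResult((s) => JSON.parse(s), json);
const withReviver = retainedBytesPerResult((s) => JSON.parse(s, (k, v) => v), json);
// With the core-js polyfill loaded, the reviver path is expected to be larger
console.log({ native, withReviver });
```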
When using `esnext.json.parse` with a `reviver` function, the deserialized objects are heavier (memory-wise) than the ones generated by the non-polyfilled `JSON.parse()` function.

How to reproduce:

Let the above script run for a while and observe the memory usage and delta when using the unmodified `JSON.parse()`, which is something like:

When you uncomment the first line and use the polyfilled `JSON.parse()` function, the output looks like this:

You can see that the memory usage is up to 4-5 times larger and growing much quicker.