Closed mcollina closed 2 years ago
What about this one? https://github.com/epoberezkin/fast-json-stable-stringify
See the benchmarks at https://github.com/BridgeAR/safe-stable-stringify#performance--benchmarks
@mcollina any way you can share the analysis or provide instructions to repro?
safe-stable-stringify seems pretty fast compared to other stringify options and V8 serialization. We could try a different approach, e.g. caching array lengths and lists of top-level object keys as well. That might help detect cache misses more quickly, before stringifying giant nested objects. I'm not sure this addresses the cause of the bottleneck you mentioned, though.
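A rough sketch of that idea, with hypothetical names (`shapeKey`, `shapeCache`, `probablyCached` are illustrative, not part of any library): keep a cheap fingerprint of each object's top-level shape, so an unseen shape can be rejected without paying for a full deep stringify.

```javascript
// Hypothetical sketch: detect guaranteed cache misses cheaply before
// paying for a full stable stringify of a large nested object.
function shapeKey (obj) {
  if (Array.isArray(obj)) return 'a:' + obj.length
  return 'o:' + Object.keys(obj).sort().join(',')
}

const shapeCache = new Set()

function probablyCached (obj) {
  // If this top-level shape has never been seen, it cannot be a cache
  // hit, so the expensive deep stringify can be skipped entirely.
  return shapeCache.has(shapeKey(obj))
}

shapeCache.add(shapeKey({ user: 1, fields: ['id'] }))
console.log(probablyCached({ user: 2, fields: ['id', 'name'] })) // true: same key set
console.log(probablyCached({ session: 'x' }))                    // false: unseen shape
```

Note this only filters out misses; a `true` result still requires the full stringify to confirm a hit.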
Essentially those are the kind of logic we might want to consider.
Just run a flamegraph against https://github.com/mercurius-js/cache/tree/main/bench and you'll see the bottleneck.
Another option would be to modify the serialize() function so that it constructs and returns a string directly, instead of an object that's later stringified. Not sure if this is feasible though.
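To make that concrete, here is an illustrative sketch only; the `serialize` shape and the `'|'` separator are assumptions, not async-cache-dedupe's actual API. It builds the cache key string in one pass and only falls back to a deterministic stringify for object arguments:

```javascript
// Hypothetical serialize() that builds the cache key string directly,
// instead of building an intermediate object that is stringified later.
function serialize (args) {
  const parts = []
  for (const arg of args) {
    if (arg === null || typeof arg !== 'object') {
      // Primitives can be appended without any stringify machinery.
      parts.push(String(arg))
    } else {
      // Deterministic output for objects: a sorted replacer array makes
      // JSON.stringify emit keys in sorted order (shallow objects only).
      parts.push(JSON.stringify(arg, Object.keys(arg).sort()))
    }
  }
  return parts.join('|')
}

console.log(serialize([42, 'users', { page: 2, limit: 10 }]))
// '42|users|{"limit":10,"page":2}'
```

The win would come from skipping the intermediate object allocation entirely for primitive-only argument lists, which are the common case for cache keys.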
@mcollina does the following screenshot corroborate your findings?
Yes, exactly. It gets worse as the parameters size grows.
I've tried using yieldable-json and worker threads with piscina for stringification. Both approaches underperform safe-stable-stringify. Unless there are other benchmarks indicating that safe-stable-stringify is a serious performance concern, perhaps it makes sense to optimize mercurius cache serialization instead?
Also, mercurius-js/cache is still using v0.5.0 of async-cache-dedupe. Perhaps we can revisit when it has the newer code?
> Also, mercurius-js/cache is still using v0.5.0 of async-cache-dedupe. Perhaps we can revisit when it has the newer code?
It has been updated this week.
I'll close this for now.
A quick analysis with a flamegraph shows that safe-stable-stringify is the major bottleneck for this cache. We should investigate whether we can come up with a faster algorithm for hashing objects that preserves the same properties.
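For reference, the core property any replacement would need to preserve is determinism: equal objects must produce equal strings regardless of key insertion order. A minimal sketch of that contract (illustrative only; it handles none of safe-stable-stringify's edge cases such as circular references or BigInt):

```javascript
// Minimal deterministic stringify: sorts object keys recursively so the
// output is independent of insertion order. Not a drop-in replacement
// for safe-stable-stringify, just a statement of the required property.
function stableStringify (value) {
  if (value === null || typeof value !== 'object') {
    return JSON.stringify(value)
  }
  if (Array.isArray(value)) {
    return '[' + value.map(stableStringify).join(',') + ']'
  }
  const keys = Object.keys(value).sort()
  const parts = keys.map(k => JSON.stringify(k) + ':' + stableStringify(value[k]))
  return '{' + parts.join(',') + '}'
}

console.log(stableStringify({ b: 1, a: { d: [2], c: 3 } }))
// '{"a":{"c":3,"d":[2]},"b":1}'
```

Any faster hashing scheme (e.g. hashing keys and values directly without building a string) would be acceptable as long as it keeps this equal-input/equal-output guarantee.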