Open dblock opened 4 years ago
Good question, @dblock. I think that makes sense to me. Apologies for the delay in response!
I can try to make a PR, if you want.
Don't let me stop you, @AlexWayfer :)
I'm interested in fresh dependencies, so I want to wait for #11 and #10.
Now #14 (and merging #16 with #17 would be good too).
Alright, digging in here. If I am understanding the intent of the benchmark, this is a clearer way to write it:
Running this gives the following output:
If I'm understanding the source of the question, you found this confusing because the report shows the magnitude of allocations rather than the magnitude of retentions. In that case, I don't think calling `GC.start` would do what you expect, because it's already done as part of the measurement.
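(The benchmark and its output were elided from this excerpt. As a stand-in, here is a minimal, stdlib-only sketch of the allocated-versus-retained distinction being discussed — a hypothetical illustration, not the thread's actual snippet; the `allocated_and_retained` helper is made up for this example:)

```ruby
require "set"

# Count how many objects a block allocates, using GC statistics.
# Hypothetical helper for illustration only.
def allocated_and_retained
  GC.start
  before = GC.stat(:total_allocated_objects)
  result = yield
  allocated = GC.stat(:total_allocated_objects) - before
  [allocated, result]
end

setup = Set.new
allocated, _result = allocated_and_retained do
  1_000.times { setup << { x: 1 } } # 1_000 separate Hash objects get allocated...
  setup
end

puts allocated  # well over 1_000: every { x: 1 } literal is a fresh allocation
puts setup.size # => 1: ...but the Set retains only one of them
```

The point: the `Set` deduplicates what it *retains*, but every `{ x: 1 }` literal still *allocates* a new object, which is what an allocation-oriented report counts.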
I think what I would like to do to address this is twofold:
(1) would make the output less confusing by default and (2) would allow you to tune the tool to be better suited for testing memory leaks.
What do you think? Would those two changes have made the situation less confusing?
I'm reviewing the original report and am a bit confused:
Yeah, `@setup << { x: 1 }` with `@setup` as a `Set` will prevent duplicates inside itself, but the `{ x: 1 }` initializations happen anyway, and they all show up in the report, so… the objects are created regardless. And now, while writing this, I see `1.680M retained` vs `168.000 retained`, and yes, putting their comparison under the `Comparison:` "header" would make things more understandable. Also maybe a rename, or a hint like "retained after garbage collection".
Coming from https://github.com/michaelherold/benchmark-memory/issues/9. The output was confusing without `GC.start`.