lud-wj opened this issue 1 year ago
:wave:
Thanks for reporting an issue! :green_heart:
Sorry for the long silence!
Also, yes, that's likely: the functions run so fast that so many measurements are generated in memory that it breaks, depending on how much RAM you have.
That doesn't mean we can't do better (this has happened before); we could offer something like a `max_measurements` option to avoid and short-circuit these kinds of problems :thinking:
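A cap like that could be sketched roughly as follows. To be clear, `max_measurements` is not an existing Benchee option and the module and function names here are mine; this only illustrates the idea of stopping collection at either the time budget or a sample cap:

```elixir
defmodule MeasurementCap do
  # Hypothetical sketch of a `max_measurements` cap (NOT current Benchee API).
  # Collect samples until either the time budget runs out or the cap is hit.
  # The count is threaded through the recursion so each sample costs O(1),
  # instead of calling length/1 on the accumulator every iteration.
  def collect(measure_fun, end_time, max_measurements) do
    do_collect(measure_fun, end_time, max_measurements, 0, [])
  end

  # Stop once the cap is reached, regardless of remaining time.
  defp do_collect(_fun, _end_time, max, count, acc) when count >= max,
    do: Enum.reverse(acc)

  defp do_collect(fun, end_time, max, count, acc) do
    if System.monotonic_time(:nanosecond) >= end_time do
      Enum.reverse(acc)
    else
      do_collect(fun, end_time, max, count + 1, [fun.() | acc])
    end
  end
end
```

With a fast no-op function and a generous time budget, the cap is what ends collection, bounding memory use no matter how fast the benchmarked function is.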
I'll run it later just to make sure. Ok, ran it now. From the looks of it, yeah, it's definitely just too many samples:
The remedy right now is to just reduce the time:
```elixir
inputs = %{
  "large" => Map.new(1..33, fn n -> {:"key_#{n}", []} end),
  "single" => %{key_1: []}
}

Benchee.run(
  %{
    "body_match" => fn map ->
      %{key_1: value} = map
      value
    end,
    "case" => fn map ->
      case map do
        %{key_1: value} -> value
      end
    end,
    "dot" => fn map ->
      map.key_1
    end,
    "head_match" => fn
      %{key_1: value} ->
        value
    end
  },
  inputs: inputs,
  pre_check: true,
  time: 1,
  warmup: 0.1,
  formatters: [{Benchee.Formatters.Console, extended_statistics: true}]
)
```
Which gets me this output:
You can see that's 2.22M samples per scenario, and 8 scenarios, oh my. I think the time it actually blows up is during the statistics calculation (with the longer times), and part of that is that, instead of implementing the super hard way to get some stats, the list is actually sorted :sweat_smile: So we could also try to make that better, although benchmarks say it's fine for normal cases.
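For illustration of why sorting shows up here at all: order statistics like the median and other percentiles fall straight out of a fully sorted list, which is simple but means an O(n log n) sort over millions of samples. A minimal nearest-rank sketch (module and function names are mine; this is not how Benchee actually computes its statistics):

```elixir
defmodule SortStats do
  # Nearest-rank percentile over a sorted copy of the samples.
  # Simple and correct, but the Enum.sort/1 call dominates the cost
  # once the sample list gets huge.
  def percentile(samples, p) when p > 0 and p <= 100 do
    sorted = Enum.sort(samples)
    index = max(round(p / 100 * length(sorted)) - 1, 0)
    Enum.at(sorted, index)
  end
end
```

Streaming estimators could avoid the sort, at the price of considerably more implementation effort, which is the trade-off hinted at above.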
Again, thanks for the report!
Hey, thanks for the confirmation! `max_measurements` could be nice, but really, my benchmark does not do any actual work, so I'm not sure it would be a really useful feature :)
I mean, it has happened before, and you ran into an issue because of it - so I think benchee as a tool can do something better, and I'll see if I can easily implement it, which should be possible. We might need to start tracking the measurement count, but that shouldn't be too bad.
Alright! I don't know the benchee code, but indeed it should be simple enough :)
Looking at the code, part of the problem is probably also that we calculate all the statistics in parallel for each scenario :joy:
```elixir
def statistics(suite) do
  percentiles = suite.configuration.percentiles

  update_in(suite.scenarios, fn scenarios ->
    Parallel.map(scenarios, fn scenario ->
      calculate_scenario_statistics(scenario, percentiles)
    end)
  end)
end
```
Which... for a normal number of scenarios is probably fine, but overall it's not the greatest and might be good to optionally turn off :joy:
Same thing for formatters :sweat_smile:
If there were a `max_concurrency` option or something like that, it could indeed be nice to expose.
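For what it's worth, the Elixir standard library's `Task.async_stream/3` already accepts a `:max_concurrency` option and preserves input order, so a bounded replacement for the `Parallel.map` call above could look roughly like this (module and function names are hypothetical, not current Benchee API):

```elixir
defmodule BoundedParallel do
  # Sketch of a concurrency-bounded map. Task.async_stream runs at most
  # `max_concurrency` tasks at once and yields results in input order;
  # passing 1 effectively makes the statistics step sequential.
  def map(enumerable, fun, max_concurrency \\ System.schedulers_online()) do
    enumerable
    |> Task.async_stream(fun, max_concurrency: max_concurrency, timeout: :infinity)
    |> Enum.map(fn {:ok, result} -> result end)
  end
end
```

Setting `max_concurrency: 1` would cap peak memory at roughly one scenario's worth of statistics work at a time, at the cost of wall-clock time.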
I had a minor hope that #408 would put a dent into this one, but it doesn't, which makes sense as the inputs used here are relatively small. That said, I haven't forgotten about this one, but the others have been more prevalent :sweat_smile:
Hello,
I see when running the following code that input "single" with case "head_match" will consume all my RAM and all my swap, and will get OOM-killed. The same case with input "large" is fine, as are all other cases with any input. It seems that the problem always arises on the last input/scenario, no matter which one I comment out.
I have no idea why.
Maybe a bug?
Edit: My suspicion is that it is such a stupid and useless benchmark that the functions just run too many times and generate too many measurements. In which case, just close the issue <3