yifanwww opened 3 hours ago
Comparing the timing of an algorithm implemented in different languages with the same measurement tool just tells you which language runs it faster. Comparing the timing of an algorithm implemented in different languages with different measurement tools tells you ... absolutely nothing. The measurement methodology is completely wrong.
And measuring the overhead of timestamping a block's execution time in an interpreted language has nothing to do with measuring the execution of a noop: in your example, the latency median of the noop is zero, with a zero median absolute deviation. Furthermore, a benchmarking tool for an interpreted language that does not include the interpreter overhead in its measurement is just meaningless.
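For illustration, here is a minimal sketch (not from the thread) of timestamping a noop with Node's built-in perf_hooks timer; on most machines the timer's resolution is far coarser than a single function call:

```js
import { performance } from 'node:perf_hooks';

function noop() {}

const samples = [];
for (let i = 0; i < 1_000_000; i++) {
  const start = performance.now();
  noop();
  samples.push(performance.now() - start);
}

samples.sort((a, b) => a - b);
// Since the timer's resolution is far coarser than a function call,
// most samples are exactly 0, so the median and the median absolute
// deviation both come out as 0.
console.log('median latency (ms):', samples[samples.length >> 1]);
```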
The measurement methodology used in BenchmarkDotNet is utterly wrong: https://github.com/tinylibs/tinybench/issues/143#issuecomment-2424232480
Before we go any further, I have a question:
What's the difference between "bench 1" and "bench 2"? Is "bench 2" correct? Or is there a way to benchmark super fast code with tinybench?
```js
import { Bench } from 'tinybench';

function fn() {
  // a small fn that only runs for a few nanoseconds
}

function bigFn() {
  // a big fn that runs for a few milliseconds
  for (let i = 0; i < 100_000_000; i++) {
    fn();
  }
}

const bench = new Bench({ time: 500 });
bench
  .add('bench 1', () => fn())
  .add('bench 2', () => bigFn());

await bench.run();
```
They are two different experiments that have nothing in common. I'm not going to explain again here what I have already explained in the link given. Please read it.
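To make the difference concrete, here is a rough manual sketch (plain timestamps, not tinybench's API) of what "bench 2" effectively measures: one timestamp pair around 100,000,000 calls, so the per-call estimate amortizes the timing overhead away:

```js
import { performance } from 'node:perf_hooks';

function fn() {
  // a small fn that only runs for a few nanoseconds
}

const N = 100_000_000;
const start = performance.now();
for (let i = 0; i < N; i++) {
  fn();
}
const elapsedMs = performance.now() - start;

// The timestamp pair is taken once for N calls, so its cost is amortized
// away; note a JIT may inline or even eliminate the empty call entirely,
// so this is at best a lower bound.
console.log('ns per call ≈', (elapsedMs * 1e6) / N);
```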
Hi, thank you for creating such an amazing benchmarking tool! However, the benchmarking results are not exactly what I want.
I have read these issues
I think I'm requesting a different feature, so I'm opening this issue.
For example, we write this code to benchmark:
and the result is
Refer to this example to reproduce the result.
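(The original snippet and result table are not reproduced above; the following is a hypothetical sketch of the kind of benchmark being described, assuming tinybench's Bench API, a naive recursive fibonacci, and a noop task.)

```js
import { Bench } from 'tinybench';

// Naive recursive fibonacci, deliberately cheap per call.
function fibonacci(n) {
  return n <= 1 ? n : fibonacci(n - 1) + fibonacci(n - 2);
}

function noop() {}

const bench = new Bench({ time: 500 });
bench
  .add('fibonacci', () => fibonacci(10))
  .add('noop', () => noop());

await bench.run();
console.table(bench.table());
```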
Hmm, I don't think this simple Fibonacci algorithm should take that long to run, and even the noop function is reported as taking 44 ns. A noop function should take zero time, or less than 1 ns for the direct function call if it isn't inlined.
Let's assume the tinybench overhead is 44.29 ns. After subtracting 44.29 ns, the benchmarking results become:
I cannot say these results are correct, because I don't know whether we can simply treat the noop benchmark result as tinybench's overhead. But at least it shows how we can get closer to the correct result.
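As a sketch, the correction itself is plain subtraction (the 44.29 ns figure is quoted from above; the measured value passed in is a hypothetical placeholder for a real task's mean):

```js
// Treat the noop result as harness overhead and subtract it from
// each task's measured mean.
const noopOverheadNs = 44.29;

function correctedNs(measuredNs) {
  // Clamp at 0: a task cannot be faster than the harness overhead.
  return Math.max(0, measuredNs - noopOverheadNs);
}

console.log(correctedNs(52.0)); // hypothetical 52.0 ns mean -> 7.71 ns
```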
I tried other benchmarking tools, and here are the benchmarking results:
Those results are significantly different from the tinybench results.
If we look into the BenchmarkDotNet logs, we will see "OverheadActual" and "WorkloadActual" entries, for example:
If we subtract them, we get

WorkloadActual - OverheadActual = 8.0104 ns/op

which is pretty close to the average result of 8.0300 ns/op.

What BenchmarkDotNet actually does is slightly different from that. You can read How it works. It says BenchmarkDotNet gets the result by calculating

Result = ActualWorkload - <MedianOverhead>
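A minimal sketch of that median-overhead correction in plain JavaScript, mirroring the formula rather than BenchmarkDotNet's actual implementation, assuming we have arrays of per-iteration timings:

```js
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = sorted.length >> 1;
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// overheadSamples: per-iteration timings of an empty "overhead" loop.
// workloadSamples: per-iteration timings of the real workload.
function correctedResults(workloadSamples, overheadSamples) {
  const overheadMedian = median(overheadSamples);
  // Result = ActualWorkload - <MedianOverhead>, applied per sample.
  return workloadSamples.map((t) => Math.max(0, t - overheadMedian));
}
```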