CodSpeedHQ / codspeed-node

Node.js libraries to create CodSpeed benchmarks
https://codspeed.io
Apache License 2.0

perf: Improved implementation of tinybench #27

Closed: Dunqing closed this pull request 10 months ago

Dunqing commented 10 months ago

I think we do not need to run optimizations here.

adriencaccia commented 10 months ago

Hey @Dunqing, glad to see you are interested in CodSpeed!

We run optimizations to get more consistent results in the benchmarks. Removing them will result in flakier benchmarks, which we do not want. Why do you think we do not need them? Maybe there is something about your use case?

Another thing: you removed the use of `__codspeed_root_frame__`, which is essential for profiling to work, so we want to keep it.
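
The mechanism is roughly this (a simplified sketch, not the actual codspeed-node source; `runBenchmark` and the benchmark body are made up for illustration):

```ts
// Simplified sketch of the mechanism (not the actual codspeed-node source).
// Wrapping the benchmark body in a function named `__codspeed_root_frame__`
// puts a stable, well-known frame into every stack sample taken while the
// benchmark runs, so the profiler can separate benchmark time from harness
// time.
async function __codspeed_root_frame__(fn: () => unknown): Promise<void> {
  await fn();
}

// Hypothetical runner: calling through the wrapper (instead of calling
// `fn()` directly) is what keeps the root frame visible in the profile.
async function runBenchmark(fn: () => unknown): Promise<void> {
  await __codspeed_root_frame__(fn);
}

void runBenchmark(() => {
  // made-up benchmark body
  let total = 0;
  for (let i = 0; i < 1_000; i++) total += i;
  return total;
});
```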

Dunqing commented 10 months ago

> We run optimizations to get more consistent results in the benchmarks. Removing them will result in flakier benchmarks, which we do not want. Why do you think we do not need them? Maybe there is something about your use case?

In Vitest, the warmup function is executed by default; warmup is similar to `optimizedFunction`. In Tinybench, warmup is not executed by default, but a user writing benchmarks will usually run it first. Therefore, I don't think we need to run the optimization; it should be up to the user to decide.
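
For example, with Tinybench's 2.x API (where warmup is an explicit, opt-in call, if I understand it correctly), a typical benchmark file already does something like:

```ts
import { Bench } from "tinybench";

const bench = new Bench({ time: 100 });

bench.add("sort a small array", () => {
  [3, 1, 2].sort((a, b) => a - b);
});

// Tinybench does not warm tasks automatically; the user opts in by
// calling warmup() before run().
await bench.warmup();
await bench.run();

console.table(bench.table());
```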

> Another thing: you removed the use of `__codspeed_root_frame__`, which is essential for profiling to work, so we want to keep it.

OK, I will keep it.

adriencaccia commented 10 months ago

> In Vitest, the warmup function is executed by default; warmup is similar to `optimizedFunction`. In Tinybench, warmup is not executed by default, but a user writing benchmarks will usually run it first. Therefore, I don't think we need to run the optimization; it should be up to the user to decide.

I agree with you that we do not need to run the optimization; an expert user could decide whether to run it. But at the moment, we think that running the optimization is a more sensible default than not running it.

What I would propose is adding a config parameter `runOptimizations = true` to the `withCodSpeed` function, with a JSDoc comment stating that setting it to `false` can result in flakier benchmarks. Depending on this parameter, we would run the optimization or not.
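
A rough sketch of the idea (the option name and signature are illustrative, not the current `withCodSpeed` API):

```ts
import { Bench } from "tinybench";

interface WithCodSpeedOptions {
  /**
   * Run the optimization (warmup) pass before measuring.
   * Setting this to `false` can result in flakier benchmarks.
   * @default true
   */
  runOptimizations?: boolean;
}

// Hypothetical shape of the proposal: the published withCodSpeed wraps a
// Bench without such an options argument today.
function withCodSpeed(bench: Bench, options: WithCodSpeedOptions = {}): Bench {
  const { runOptimizations = true } = options;
  const originalRun = bench.run.bind(bench);
  bench.run = async () => {
    if (runOptimizations) {
      // Warm the tasks so the engine optimizes hot paths before measuring.
      await bench.warmup();
    }
    return originalRun();
  };
  return bench;
}
```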

But before doing that, it would help to know your use case for not wanting the benchmarks warmed, because adding this parameter could lead to really flaky and possibly unusable metrics. We can use our Discord to discuss this further if you'd prefer not to do that here.

Dunqing commented 10 months ago

> What I would propose is adding a config parameter `runOptimizations = true` to the `withCodSpeed` function, with a JSDoc comment stating that setting it to `false` can result in flakier benchmarks. Depending on this parameter, we would run the optimization or not.

> But before doing that, it would help to know your use case for not wanting the benchmarks warmed, because adding this parameter could lead to really flaky and possibly unusable metrics. We can use our Discord to discuss this further if you'd prefer not to do that here.

I just don't think running the optimization is always necessary, but if you want to keep it for now, that's fine with me!

Dunqing commented 10 months ago

Sorry, I just realized that the original implementation was correct and that this PR no longer makes sense, so I will close it.