Open giggio opened 8 years ago
I've been attempting to get this working, but all of my standard approaches to controlled benchmarking are producing unreliable results. I'm leaning towards running a large number of passes using the same analyzer, throwing out a fixed number of outliers (high and low), and averaging the results. However, the number of passes required to get the confidence interval small enough for meaningful results is large, so it takes nearly 2 hours to calculate the values for just our own relatively small solution. I'm concerned that we would additionally need to test other projects (#16) before we're sure that the numbers accurately reflect the expected real-world performance.
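The trimmed-mean scheme described above can be sketched roughly as follows. This is in Python purely for illustration (the project itself is C#); `run_pass` stands in for one analyzer pass, and the pass count, trim count, and normality assumption behind the confidence interval are all arbitrary choices, not anything the project has settled on:

```python
import statistics
import time

def trimmed_mean_benchmark(run_pass, passes=50, trim=5):
    """Time run_pass() repeatedly, discard the `trim` fastest and
    `trim` slowest timings, and average what remains."""
    timings = []
    for _ in range(passes):
        start = time.perf_counter()
        run_pass()  # one analyzer pass (placeholder)
        timings.append(time.perf_counter() - start)
    timings.sort()
    kept = timings[trim:len(timings) - trim]  # drop outliers on both ends
    mean = statistics.mean(kept)
    # 95% confidence half-width, assuming roughly normal timing noise;
    # widening this interval is what forces the large pass counts.
    half_width = 1.96 * statistics.stdev(kept) / len(kept) ** 0.5
    return mean, half_width
```

One could loop, increasing `passes` until `half_width` falls below some fraction of `mean`, which is essentially why the run time balloons: noisy timings mean many more passes before the interval is tight enough to trust.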
This continues to be an issue. I was just assigned an issue asking us to look into this, and we don't even know where to start: https://github.com/code-cracker/code-cracker/issues/766
Any update on this?
Some ideas: slow, regular, fast analyzers?