1fish2 opened 5 years ago
This sounds like it would be pretty useful! It would be especially relevant to some of the current issues we're having if we profiled memory usage for each of the different parts (not sure if that's part of what you meant by performance measurements, or if you were mostly thinking of execution time).
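For per-part memory numbers, Python's built-in `tracemalloc` might be enough to start with (it traces Python-level allocations, including numpy arrays). A minimal sketch; the helper name and the `run_parca` call are hypothetical, just to show the shape:

```python
import tracemalloc

def measure_peak_memory(label, fn, *args, **kwargs):
    """Run fn and report the peak memory it allocated, in MiB."""
    tracemalloc.start()
    try:
        result = fn(*args, **kwargs)
        _current, peak = tracemalloc.get_traced_memory()
    finally:
        tracemalloc.stop()
    print(f"{label}: peak {peak / 2**20:.1f} MiB")
    return result

# e.g. measure_peak_memory("parca", run_parca)
# where run_parca is whatever entry point we already invoke for that part
```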
+1 for including memory usage! I didn't think of that.
Also, what should we measure to log Sherlock problems?
It'd be more useful to turn the library performance metrics from pass/fail tests into accumulating daily measurements, and likewise to accumulate performance measurements from Parca, one simulation generation, and the analysis plots.
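Concretely, one way to make that switch: instead of `assert elapsed < limit`, each test appends its measurement to a log that the charts can read later. A rough sketch, assuming a JSON-lines file; the file name and helper names are all made up:

```python
import json
import subprocess
import time
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("perf_metrics.jsonl")  # hypothetical location for the accumulated log

def record_metric(name, seconds):
    """Append one measurement as a JSON line rather than asserting a threshold."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "metric": name,
        "seconds": seconds,
        # recording the commit helps separate code changes from hardware slowdowns
        "commit": subprocess.check_output(
            ["git", "rev-parse", "--short", "HEAD"], text=True).strip(),
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def timed(name, fn, *args, **kwargs):
    """Time fn and log the result, e.g. timed("parca", run_parca)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    record_metric(name, time.perf_counter() - start)
    return result
```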
Then chart the data. This would show optimization progress as well as unintentional slowdowns and Sherlock hardware slowdowns.
Some modeling improvements will slow things down. That's progress, too. A way to manually add annotations to the data would help with understanding the charts.
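Something like this matplotlib sketch could cover both the charting and the manual annotations. It assumes the JSON-lines log from the sketch above, and the annotation entry is a made-up example; a real version might keep annotations in a small YAML or CSV file:

```python
import json
from datetime import date, datetime
from pathlib import Path

import matplotlib.pyplot as plt

# Manual annotations (made-up example): date -> why the metric shifted
ANNOTATIONS = {
    date(2019, 6, 1): "new ODE solver (expected slowdown)",
}

runs = [json.loads(line)
        for line in Path("perf_metrics.jsonl").read_text().splitlines()]
parca = [r for r in runs if r["metric"] == "parca"]

xs = [datetime.fromisoformat(r["timestamp"]).date() for r in parca]
ys = [r["seconds"] for r in parca]

fig, ax = plt.subplots()
ax.plot(xs, ys, marker="o")
for day, note in ANNOTATIONS.items():
    # mark the annotated date and label it so chart readers see the context
    ax.axvline(day, linestyle="--", color="gray")
    ax.annotate(note, xy=(day, max(ys)), rotation=90, fontsize=8,
                ha="right", va="top")
ax.set_xlabel("date")
ax.set_ylabel("Parca run time (s)")
fig.autofmt_xdate()
plt.show()
```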