Closed: plokhotnyuk closed this issue 6 years ago.
I'm thinking of doing this one, but I don't know which chart to use or how it would be displayed in the repository for reference.
I'm guessing the chart could be generated by a call from CI and then embedded in the README.md of the repository.
IMHO, results for the current version of the benchmarks should be displayed as a horizontal bar chart with error bars, with the benchmark method name as a label on the left side, like in this image: https://peltiertech.com/images/2010-12/TextLabel2007Bar08.png
But it would be better to revise the benchmarks so that each is parametrized by at least one parameter (size of an array, length of a string, some back-off parameter when measuring latency instead of throughput, etc.).
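A minimal sketch of what such a parametrized benchmark could look like with JMH's `@Param` annotation (the benchmark class, field names, and sizes here are hypothetical, not taken from the jsoniter-scala suite):

```scala
import java.util.concurrent.TimeUnit
import org.openjdk.jmh.annotations._

@State(Scope.Benchmark)
@BenchmarkMode(Array(Mode.Throughput))
@OutputTimeUnit(TimeUnit.SECONDS)
class ArraySumBenchmark {
  // Each value becomes a separate data point, giving charts an x-axis
  @Param(Array("1", "10", "100", "1000"))
  var size: Int = _

  var data: Array[Int] = _

  @Setup
  def setup(): Unit = data = (1 to size).toArray

  @Benchmark
  def sum(): Int = data.sum
}
```

With sbt-jmh this would be run via `jmh:run`, and the per-size scores are what a parametrized chart (as in the Shipilev article linked below) would plot.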
BTW: in this article by the author of JMH you can see great examples of charts for parametrized benchmarks: https://shipilev.net/blog/2014/nanotrusting-nanotime/
Currently Travis CI uses AWS instances that are unstable, so results may differ greatly between runs. I would prefer to publish results from a more stable environment (like a bare-metal server or a desktop/notebook) without background activity on it.
If we manage to make it cute and customizable, then we can ask the sbt-jmh developers to release it as an extension.
JMH Visualizer was used for plotting benchmark results: https://plokhotnyuk.github.io/jsoniter-scala/
Use scala-chart for automation of plotting of benchmark results
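A rough sketch of how scala-chart could turn JMH scores into the horizontal bar chart described above (the result values, benchmark names, and output file name are made up for illustration; the exact `saveAsPNG` signature should be checked against the scala-chart docs):

```scala
// Assumes the scala-chart library is on the classpath.
import scalax.chart.api._

object PlotBenchmarks extends App {
  // Hypothetical (benchmark name, ops/s) pairs, e.g. parsed from JMH CSV output.
  val results = Seq(
    "readArray"  -> 1234567.0,
    "writeArray" ->  987654.0
  )

  // Build a bar chart of throughput per benchmark and write it to a PNG,
  // which CI could then commit or publish for embedding in README.md.
  val chart = BarChart(results, title = "Benchmark throughput (ops/s)")
  chart.saveAsPNG("benchmarks.png")
}
```

Error bars (as in the image linked above) would need JMH's score error column and a statistical dataset rather than a plain sequence, which scala-chart delegates to the underlying JFreeChart API.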