ropensci / unconf17

Website for 2017 rOpenSci Unconf
http://unconf17.ropensci.org

Track package performance over time #56

Open jimhester opened 7 years ago

jimhester commented 7 years ago

covr and codecov.io are great for tracking code coverage during a package's development. Another aspect of a package that would be useful to track is the performance of one or more benchmark functions.

This is useful for package authors to ensure they don't inadvertently introduce a performance regression when adding new features. It is also useful for users to see how much a new version improves or reduces performance. We could also run the benchmarks when a PR is submitted, to see how the changes impact current performance.

I wrote a rough example at https://github.com/jimhester/benchthat and @krlmlr has dplyr specific code to do this at https://krlmlr.github.io/dplyr.benchmark/.

Some useful features to me would be

  1. Store the results in an easy-to-parse file in the repository (my draft puts them in /docs/benchmarks).
  2. Helper functions that are easy to run automatically in a package's tests.
  3. Run a benchmark retroactively over the repo history.
    • Is there a peak finding algorithm / git bisect we could use to find performance breakpoints so you don't have to exhaustively benchmark each commit?
  4. Visualizing and reporting on benchmark results.
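Features 1 and 2 could be sketched roughly as below. This is only an illustration of the idea, not benchthat's actual API: `track_benchmark()` and the `benchmarks.csv` file name are hypothetical, chosen here to show timings being appended to an easy-to-parse CSV under `docs/benchmarks/`.

```r
# Hypothetical helper: time an expression and append the result to a CSV
# in the repository, so benchmark history is tracked alongside the code.
track_benchmark <- function(name, expr, dir = "docs/benchmarks") {
  # Lazy evaluation means `expr` is first forced inside system.time(),
  # so the elapsed time measures the benchmarked expression itself.
  elapsed <- system.time(expr)[["elapsed"]]
  dir.create(dir, recursive = TRUE, showWarnings = FALSE)
  path <- file.path(dir, "benchmarks.csv")
  row <- data.frame(
    name = name,
    elapsed = elapsed,
    timestamp = format(Sys.time(), "%Y-%m-%dT%H:%M:%S"),
    stringsAsFactors = FALSE
  )
  # Write the header only when creating the file; append thereafter.
  write.table(row, path, sep = ",", append = file.exists(path),
              col.names = !file.exists(path), row.names = FALSE)
  invisible(row)
}

# Example: track how long sorting 100,000 uniform draws takes.
track_benchmark("sort_1e5", sort(runif(1e5)))
```

A helper along these lines could be called from a package's test suite (feature 2), and the accumulated CSV read back for visualization (feature 4).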
noamross commented 7 years ago

There's also https://github.com/analyticalmonk/Rperform

jimhester commented 7 years ago

Rperform seems like it already does most of this, but it clearly needs more exposure / use, and possibly some thought about better integration with pkgdown / Travis so it is more useful for PR results.

jennybc commented 7 years ago

I love this idea!

Question re: outside support:

codecov.io is to code coverage as ??? is to benchmarking

Or does this aspect have to be handled by the package described here? The display of results over time could potentially be handled in a pkgdown site.

jimhester commented 7 years ago

I don't know of anything like codecov.io for tracking benchmarks over time. If there were, we could use it, or maybe set up a simple service to do so.

gaborcsardi commented 7 years ago

codecov.io is to code coverage as ??? is to benchmarking

https://github.com/tobami/codespeed is the only one I know of. You need to run your own service.

Julia used to run it, I don't know if they still do.

gaborcsardi commented 7 years ago

The Julia site used to be at http://speed.julialang.org/

It is gone.

jsta commented 7 years ago

It seems to me that it would be logical to integrate performance testing with testthat.
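One shape that integration might take is a custom expectation built on testthat's existing comparison expectations. `expect_faster_than()` below is hypothetical, not part of testthat; it simply fails the test if an expression exceeds a time budget in seconds.

```r
library(testthat)

# Hypothetical expectation sketching testthat integration: fail if an
# expression runs slower than a given budget (in seconds).
expect_faster_than <- function(expr, budget) {
  elapsed <- system.time(expr)[["elapsed"]]
  expect_lt(elapsed, budget)  # real testthat expectation
  invisible(elapsed)
}

test_that("sorting stays within its time budget", {
  expect_faster_than(sort(runif(1e4)), budget = 1)
})
```

A fixed budget is the simplest check; a fuller design would compare against stored historical timings so regressions are detected relative to previous commits rather than an absolute threshold.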