psf / pyperf

Toolkit to run Python benchmarks
http://pyperf.readthedocs.io/
MIT License

Recognize graalpython as a python impl with jit #119

Closed timfel closed 2 years ago
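The change this PR makes is essentially to add GraalPython to the set of implementations pyperf treats as JIT-compiled (JIT implementations get extra warmup runs). A rough sketch of that kind of check — the function name, set contents, and implementation string are assumptions for illustration, not pyperf's actual internals:

```python
import platform

# Hypothetical set of implementation names assumed to ship a JIT;
# GraalPython reports "GraalVM" from platform.python_implementation().
JIT_IMPLEMENTATIONS = {"PyPy", "GraalVM"}

def has_jit(impl_name=None):
    """Return True if the given (or current) Python implementation
    is assumed to use a JIT compiler."""
    if impl_name is None:
        impl_name = platform.python_implementation()
    return impl_name in JIT_IMPLEMENTATIONS
```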

vstinner commented 2 years ago

Oh, nice to see people using pyperf on GraalPython :-) Did you have a look at the warmup runs and value runs to see if the timings are steady at some point? My latest study was https://vstinner.readthedocs.io/pypy_warmups.html
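One way to eyeball whether timings have become steady is to compare the relative standard deviation of the measured values against a cut-off; the 5% threshold below is an arbitrary heuristic for illustration, not pyperf's own criterion:

```python
import statistics

def is_steady(values, threshold=0.05):
    """Heuristic: timings count as 'steady' when the relative standard
    deviation (stdev / mean) stays under `threshold` (5% by default,
    an arbitrary cut-off chosen for this sketch)."""
    mean = statistics.mean(values)
    return statistics.stdev(values) / mean < threshold

# hypothetical timings in ms: warmups still trending down as the JIT
# compiles hot code, while the measured values have settled
warmups = [120.0, 80.0, 45.0, 31.0]
values = [30.1, 29.9, 30.0, 30.2, 29.8]
```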

The https://arxiv.org/abs/1602.00602 paper disappointed me :-D I tried to implement the changepoint analysis, but I failed to find an algorithm for it.
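A much simpler scheme than the paper's method is a single-changepoint search: try every split point and keep the one that minimizes the within-segment sum of squared deviations. A minimal sketch (illustrative only, not the paper's algorithm, which handles multiple changepoints with statistical guarantees):

```python
import statistics

def changepoint(values):
    """Return the split index that minimizes the total within-segment
    sum of squared deviations from each segment's mean."""
    def sse(segment):
        m = statistics.fmean(segment)
        return sum((x - m) ** 2 for x in segment)
    best_index, best_cost = None, float("inf")
    for i in range(1, len(values)):
        cost = sse(values[:i]) + sse(values[i:])
        if cost < best_cost:
            best_index, best_cost = i, cost
    return best_index

# hypothetical warmup timings: the JIT kicks in at index 4
timings = [50.0, 48.0, 51.0, 49.0, 10.0, 10.2, 9.9, 10.1]
```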

It would also be nice to check whether the distribution is bi-modal (or multi-modal), but again, I don't know how to compute that.
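A cheap, if crude, test for this is Sarle's bimodality coefficient, built from sample skewness and kurtosis; values above 5/9 (the uniform distribution's value) hint at a bi- or multi-modal distribution. A sketch using the plain moment-based estimators, without sample-size corrections:

```python
def bimodality_coefficient(values):
    """Sarle's bimodality coefficient: (skewness**2 + 1) / kurtosis,
    using uncorrected (population) moment estimators. Values above
    5/9 suggest a bi- or multi-modal distribution."""
    n = len(values)
    mean = sum(values) / n
    devs = [x - mean for x in values]
    m2 = sum(d ** 2 for d in devs) / n
    m3 = sum(d ** 3 for d in devs) / n
    m4 = sum(d ** 4 for d in devs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2  # non-excess kurtosis
    return (skew ** 2 + 1) / kurt

# two tight clusters -> clearly bimodal, coefficient well above 5/9
two_clusters = [1.0] * 10 + [2.0] * 10
```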

corona10 commented 2 years ago

@timfel Thank you for the patch. As a big fan of the GraalPython project, I am really happy with the PR. Do you need this feature officially right now? If so, I would like to publish 2.3.1 this week.

timfel commented 2 years ago

Do you need this feature officially right now? If so, I would like to publish 2.3.1 this week.

@corona10 I don't need it officially very urgently, but within the next couple of months would be nice :) Until then, I've just installed the package off master for local testing.

Did you have a look at the warmup runs and value runs to see if the timings are steady at some point?

@vstinner I have taken your script and logged the warmups, loops, and values calculated in a pyperf benchmark run for GraalPython. They are roughly the same as for PyPy, sometimes a bit more, sometimes a bit less. As with PyPy, some benchmarks are extremely noisy. I plan to do a bit more analysis; we are just preparing our New Year's release right now, so I won't be able to spend much time on this until around Christmas.

vstinner commented 2 years ago

This change is now part of the just-released pyperf 2.3.1.