carljm closed this pull request 1 year ago.
The 3.12 failure looks like an issue building greenlet, not related to this PR.
Yep, see https://github.com/python/pyperformance/issues/263 for 3.12+greenlet.
Nice, I will take a look.
Overall this looks good, but why not just change Richards itself to use `super()`? The total running time of the suite has been increasing lately due to the growing number of benchmarks. Adding benchmarks is good, but let's avoid duplicating a similar workload if possible.
cc @pablogsal
> Overall this looks good, but why not just change Richards itself to use `super()`?
I think the main reason not to is that Richards timings would no longer be comparable to previous timings. (I'm not sure whether that matters; I'm quite happy to just change Richards if that's OK with the maintainers.)
This looks good to me. I agree that adding (rather than replacing) is the way to go here. We can always remove the old benchmark once we learn more from this one.
Currently, none of the benchmarks in pyperformance exercise `super()` method calls significantly. But real-world modern object-oriented Python code often does (one data point: the Instagram server codebase uses lots of `super()` method calls, some in hot paths).

To correct this gap in visibility, this PR adds a copy of the Richards benchmark that is modified to use `super()` calls. Existing super-method calls that were done by naming the base class directly (`Task.__init__(self, ...)`) are modified to use `super()`, and the already-polymorphic `Task.fn` method is modified slightly so that overrides now call up to the base method using `super()`.

If we accept that Richards is a roughly reasonable representation of some object-oriented Python code, I think this is a similarly reasonable representation of the real-world use of `super()` in such code.

Thanks to @pablogsal for this suggestion.
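As a rough illustration of the kind of change described above, here is a minimal sketch. The simplified classes below are hypothetical and only echo the Richards names (`Task`, `Task.fn`); they are not the actual benchmark source.

```python
# Before: super-method calls name the base class directly.
class Task:
    def __init__(self, name):
        self.name = name

    def fn(self):
        # Base behavior, overridden polymorphically by subclasses.
        return f"task {self.name}"


class DeviceTask(Task):
    def __init__(self, name):
        Task.__init__(self, name)  # direct base-class call

    def fn(self):
        return "device: " + Task.fn(self)  # direct base-class call


# After: the same calls go through zero-argument super().
class SuperTask:
    def __init__(self, name):
        self.name = name

    def fn(self):
        return f"task {self.name}"


class SuperDeviceTask(SuperTask):
    def __init__(self, name):
        super().__init__(name)  # super() method call

    def fn(self):
        return "device: " + super().fn()  # override calls up via super()
```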