gauravssnl / unladen-swallow

Automatically exported from code.google.com/p/unladen-swallow

30-60% regression in UNPACK_SEQUENCE vs 2009Q1 #65

Closed GoogleCodeExporter closed 8 years ago

GoogleCodeExporter commented 8 years ago
The new unpack_sequence microbenchmark in perf.py confirms the PyBench findings discussed in issue 52:

2009Q1 vs trunk@r669 -j never
unpack_sequence:
Min: 0.000081 -> 0.000116: 30.25% slower
Avg: 0.000083 -> 0.000118: 29.49% slower
Significant (t=-338.320084, a=0.95)

2009Q1 vs trunk@r669 -j always -O2
unpack_sequence:
Min: 0.000081 -> 0.000212: 61.75% slower
Avg: 0.000083 -> 0.000216: 61.42% slower
Significant (t=-1792.407645, a=0.95)
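For context, an unpack_sequence-style microbenchmark times how fast the interpreter can repeatedly unpack a small tuple or list into local variables. The sketch below is a hypothetical re-creation of that pattern, not the actual perf.py harness; function names and iteration counts are illustrative.

```python
import timeit

def unpack_tuple(t=(1, 2, 3, 4)):
    # Each call exercises the UNPACK_SEQUENCE opcode on an exact tuple.
    a, b, c, d = t
    return a + b + c + d

def unpack_list(l=(1, 2, 3, 4)):
    # Unpacking a list takes a different fast path in the interpreter.
    a, b, c, d = list(l)
    return a + b + c + d

if __name__ == "__main__":
    # Time many iterations so the per-unpack cost dominates call overhead.
    for fn in (unpack_tuple, unpack_list):
        elapsed = timeit.timeit(fn, number=100_000)
        print(f"{fn.__name__}: {elapsed:.4f}s / 100k calls")
```

Comparing the two builds on a loop like this is what produces the Min/Avg deltas quoted above.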

Original issue reported on code.google.com by collinw on 25 Jun 2009 at 1:33

GoogleCodeExporter commented 8 years ago
Tweaking the "specialize UNPACK_SEQUENCE for tuples and lists, but only inline the dispatch code" patch [1] blows the doors off the unpack_sequence microbenchmark (90% faster), but doesn't translate into any increased performance for the macrobenchmarks. If anything, it's slowing them down. The increase in IR size is pretty minimal, but I need to look into it more.

[1] - the basic idea: http://codereview.appspot.com/81041
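The specialization idea can be sketched at the Python level: check the concrete type once, take a direct-indexing fast path for exact tuples and lists, and fall back to the generic iterator protocol otherwise. This is a hypothetical illustration only; the real patch operates on the compiled LLVM IR for the UNPACK_SEQUENCE opcode, and the actual opcode also pushes items in reverse stack order, which is omitted here.

```python
def unpack_sequence(seq, n):
    """Return n items from seq, mimicking UNPACK_SEQUENCE's checks."""
    if type(seq) is tuple or type(seq) is list:
        # Fast path: exact tuple/list, length known, items fetched by index.
        if len(seq) != n:
            raise ValueError(f"expected {n} values, got {len(seq)}")
        return [seq[i] for i in range(n)]
    # Generic path: any iterable, driven through the iterator protocol.
    it = iter(seq)
    items = []
    for _ in range(n):
        try:
            items.append(next(it))
        except StopIteration:
            raise ValueError("not enough values to unpack")
    # The sequence must be exhausted after n items.
    try:
        next(it)
    except StopIteration:
        return items
    raise ValueError("too many values to unpack")
```

The "only inline the dispatch code" part of the patch corresponds to inlining just the type check above, while the fast and slow bodies stay out of line to limit IR growth.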

Original comment by collinw on 26 Jun 2009 at 12:07

GoogleCodeExporter commented 8 years ago
Patch mailed out for review.

Original comment by collinw on 27 Jun 2009 at 2:10

GoogleCodeExporter commented 8 years ago
Fixed in r683:

trunk vs patch (-j always -O2)
unpack_sequence:
Min: 0.000216 -> 0.000088: 145.92% faster
Avg: 0.000219 -> 0.000090: 143.12% faster
Significant (t=8202.186864, a=0.95)

Patch by Alex Gaynor!

Original comment by collinw on 1 Jul 2009 at 6:38