Closed: GoogleCodeExporter closed this issue 8 years ago
Tweaking the "specialize UNPACK_SEQUENCE for tuples and lists, but only inline the dispatch code" patch [1] blows the doors off the unpack_sequence microbenchmark (90% faster), but doesn't translate into any increased performance on the macrobenchmarks. If anything, it's slowing them down. The increase in IR size is pretty minimal, but I need to look into it more.
[1] - the basic idea: http://codereview.appspot.com/81041
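A rough sketch of the idea (in Python, for illustration only; the actual patch emits LLVM IR in Unladen Swallow's C++ compiler, and the function name and structure here are hypothetical): the compiled code checks the two common sequence types inline and unpacks them with direct indexing, falling back to the generic iterator protocol for everything else. Only the type dispatch is inlined; the slow path stays out of line.

```python
def unpack_sequence(seq, count):
    """Hypothetical model of specialized UNPACK_SEQUENCE dispatch."""
    # Inlined fast paths: an exact tuple or list of the expected length
    # can be unpacked with direct indexing, no iterator protocol needed.
    if type(seq) is tuple and len(seq) == count:
        return list(seq)
    if type(seq) is list and len(seq) == count:
        return seq[:]
    # Out-of-line slow path: generic iteration, mirroring what the
    # interpreter's fallback helper does for arbitrary iterables.
    it = iter(seq)
    result = []
    for _ in range(count):
        try:
            result.append(next(it))
        except StopIteration:
            raise ValueError("not enough values to unpack") from None
    try:
        next(it)
    except StopIteration:
        return result
    raise ValueError("too many values to unpack")
```

This illustrates why the microbenchmark improves so much (unpack_sequence exercises the fast path in a tight loop) while real programs, which unpack far less often, see little change.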
Original comment by collinw on 26 Jun 2009 at 12:07
Patch mailed out for review.
Original comment by collinw on 27 Jun 2009 at 2:10
Fixed in r683:
trunk vs patch (-j always -O2)
unpack_sequence:
Min: 0.000216 -> 0.000088: 145.92% faster
Avg: 0.000219 -> 0.000090: 143.12% faster
Significant (t=8202.186864, a=0.95)
Patch by Alex Gaynor!
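For reference, the benchmark driver's "X% faster" figure is the relative reduction in runtime, (old - new) / new. Recomputing it from the rounded Min times above (a sketch; the reported 145.92% comes from the unrounded timings):

```python
# Speedup as reported by the benchmark harness: how much faster the new
# runtime is relative to the old, expressed as a percentage.
old, new = 0.000216, 0.000088  # Min times from the r683 run above
speedup = (old - new) / new * 100
print(f"{speedup:.2f}% faster")  # close to the reported 145.92%
```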
Original comment by collinw on 1 Jul 2009 at 6:38
Original issue reported on code.google.com by collinw on 25 Jun 2009 at 1:33