Automatically exported from code.google.com/p/unladen-swallow

On-stack replacement for running generators #114

Closed: GoogleCodeExporter closed this issue 8 years ago

GoogleCodeExporter commented 8 years ago
html5lib has two very hot generators that are called once at the beginning of the program and then yield hundreds of thousands of times. The hotness model currently identifies these functions as worthy of optimization, but because they are never called again (only resumed), they are never compiled to machine code.
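
For concreteness, here is a tiny illustrative example (not html5lib code; the names and counts are made up) of the call pattern: the function is called exactly once, so a model that only observes calls sees it once no matter how many times the generator yields.

    # Illustrative only (not html5lib): one call, then 200,000 resumptions.
    def tokens():
        for i in range(200000):
            yield i

    calls = 0
    gen = tokens()   # the only point a call-counting hotness model observes
    calls += 1
    for _ in gen:    # the generator is resumed 200,000 times here
        pass
    print(calls)     # -> 1: the function never looks hot to a call counter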

We should check co_hotness on generator reentry so that we can potentially recompile these functions and replace them while the generator instance is live.

Hot functions:
- html5lib/tokenizer.py:59(__iter__)
- html5lib/html5parser.py:196(normalizedTokens)
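
To make the proposal concrete, here is a minimal runnable sketch; the hotness bump and threshold values are purely illustrative (they are not Unladen Swallow's real accounting), and the check is written inline in the generator body as a stand-in for what the eval loop would do on each resumption. The point is that the compile check fires on reentry even though the function is called only once.

    # Toy model: the generator's code object accumulates hotness as it runs,
    # and the compile check fires on reentry rather than only at call time.
    HOTNESS_THRESHOLD = 100000

    class ToyCode:
        def __init__(self, name):
            self.name = name
            self.hotness = 0
            self.compiled = False

    def check_hotness_on_reentry(code):
        # Stand-in for the check the eval loop would do each time the
        # generator's frame is resumed.
        if not code.compiled and code.hotness > HOTNESS_THRESHOLD:
            code.compiled = True      # stand-in for compiling to machine code
            print("recompiling %s while its generator is live" % code.name)

    def hot_generator(code):
        for i in range(200000):
            code.hotness += 1         # stand-in for normal hotness accounting
            check_hotness_on_reentry(code)
            yield i

    code = ToyCode("tokenizer.__iter__")
    sum(hot_generator(code))          # called once, resumed 200,000 times
    print(code.compiled)              # -> True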

Original issue reported on code.google.com by collinw on 12 Jan 2010 at 2:47

GoogleCodeExporter commented 8 years ago
Out of curiosity: would it make sense to compile all generators unconditionally on the first call?

Generators are pretty rare (e.g. I counted ~5300 " def " statements in the stdlib but only ~130 yield statements).
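
A rough way to reproduce that kind of count, as a sketch rather than the exact command used here; the matching rules are approximate and will miscount strings and comments slightly.

    # Rough count of "def" vs "yield" statements in the installed stdlib.
    import sysconfig
    from pathlib import Path

    stdlib = Path(sysconfig.get_paths()["stdlib"])
    defs = 0
    yields = 0
    for path in stdlib.rglob("*.py"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for line in text.splitlines():
            stripped = line.lstrip()
            if stripped.startswith("def "):
                defs += 1
            elif stripped.startswith("yield") or " yield " in line:
                yields += 1
    print(defs, yields)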

Original comment by ilya.san...@gmail.com on 13 Jan 2010 at 3:27

GoogleCodeExporter commented 8 years ago
No, I don't think it would, since we'd lose all the profiling data.

Original comment by alex.gay...@gmail.com on 13 Jan 2010 at 4:50

GoogleCodeExporter commented 8 years ago
I've started working on this. I think what we should do is split mark_called_and_maybe_compile into separate functions called mark_called and maybe_compile. Then we can move maybe_compile into PyEval_EvalFrame and leave mark_called in PyEval_EvalCodeEx, so that when we reenter a generator (which enters at PyEval_EvalFrame) we check the hotness. Sound good? :) Patch coming later today...
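
Roughly the shape of that split, modeled in Python. The function names follow the comment above; everything else is a simplified stand-in for the C code paths, not the real implementation, and it assumes co_hotness is accumulated elsewhere (calls, loop ticks, etc.), so only the placement of the bookkeeping and the compile check is shown.

    HOTNESS_THRESHOLD = 100000

    class Code:
        def __init__(self):
            self.hotness = 0
            self.compiled = False

    def mark_called(code):
        # Stays in the call path (PyEval_EvalCodeEx): count the call itself.
        code.hotness += 10

    def maybe_compile(code):
        # Moves to the frame-entry path (PyEval_EvalFrame), so every entry,
        # including resuming a live generator, can trigger compilation.
        if not code.compiled and code.hotness > HOTNESS_THRESHOLD:
            code.compiled = True      # stand-in for compiling to machine code

    def eval_frame(code):
        maybe_compile(code)           # now runs on generator reentry too
        # ... interpret bytecode or run the compiled version ...

    def call_function(code):
        mark_called(code)             # only on a real call, not on resumption
        eval_frame(code)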

Original comment by reid.kle...@gmail.com on 13 Jan 2010 at 6:41

GoogleCodeExporter commented 8 years ago

Original comment by collinw on 15 Jan 2010 at 4:49

GoogleCodeExporter commented 8 years ago
Fixed in r1032.

### html5lib_warmup ###
Min: 12.435418 -> 12.430771: 1.0004x faster
Avg: 13.130518 -> 13.066833: 1.0049x faster
Not significant
Stddev: 1.45305 -> 1.15964: 1.2530x smaller
Timeline: http://tinyurl.com/ykyvzxn

Let's look at that first run in more detail:

### html5lib ###
Min: 17.347363 -> 16.364512: 1.0601x faster
Avg: 17.382957 -> 16.393608: 1.0603x faster
Significant (t=85.867780, a=0.95)
Stddev: 0.02557 -> 0.02596: 1.0154x larger
Timeline: http://tinyurl.com/yconbz5

My testing shows this shaving a full second off the html5lib runtime. Woot!

Original comment by collinw on 22 Jan 2010 at 10:48