I had a slightly better, though still not fully accurate, desugaring in a different tab:
function f() {
  var fp = 0;
  var state = 'newborn';
  return {
    next: function() {
      if (state == 'executing' || state == 'closed')
        throw new Error();
      switch (fp) {
        case 0:
          fp++;
          state = 'suspended';
          return {value: 'a', done: false};
        case 1:
          state = 'closed';
          return {value: 'b', done: true};
      }
    }
  };
}
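For reference, here is a driver for any iterator following the {value, done} protocol, such as the f() above. The `collect` and `pair` names below are my own illustrative stand-ins, not part of the original comment; `pair` mirrors the shape of f() without its state checks.

```javascript
// Generic driver for any iterator following the {value, done} protocol.
// A value delivered alongside done: true is discarded, matching the
// semantics a for-of loop would have.
function collect(iter) {
  var results = [];
  for (var r = iter.next(); !r.done; r = iter.next())
    results.push(r.value);
  return results;
}

// Tiny stand-in iterator with the same yield shape as f() above.
function pair() {
  var fp = 0;
  return {
    next: function() {
      switch (fp++) {
        case 0: return {value: 'a', done: false};
        default: return {value: 'b', done: true};
      }
    }
  };
}
```

Note that since the final value 'b' arrives with done: true, a protocol-respecting consumer drops it, so only 'a' is collected.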
Original comment by arv@chromium.org
on 13 Mar 2013 at 6:16
I neglected to check my email prior to submitting a patch series related
to StopIteration. Please ignore that patch series, in light of this new
development.
I also have:
- a 6-patch series that factors out the generator common code into a
runtime function,
- a 1-patch series that breaks out the main state machine into a
separate function (this speeds up a simple var-increment benchmark by
25%, and the advantage should only increase as the generator does more
work),
- and a 1-patch series that removes do-nothing states from the state
machine (this one doesn't visibly improve performance, but it does
make the state machine code easier to read).
So, a quick request: please land these patches before we start
converting to the new protocol. My main reason, aside from
convenience, is that I would like a (somewhat) optimized baseline
against which to benchmark the new protocol.
----
Looking forward to reading the meeting summaries for this. Interested in
finding out just how this sudden turn of events managed to happen.
Part of me is a little worried about the small-object creation and GC
overhead (the two-method protocol is probably easier to optimize), but
I think that even if it's slow now, it will be much easier for JS
engines to optimize in the future than an exception-based protocol.
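To make the comparison concrete, here is a sketch (all identifiers mine, not spec names) of the two termination styles under discussion: the old StopIteration-style exception protocol versus the new two-field result object.

```javascript
// Exception-based style: next() throws a sentinel when exhausted.
var StopIteration = {};                  // illustrative stand-in sentinel
function exceptionIter(arr) {
  var i = 0;
  return {
    next: function() {
      if (i >= arr.length) throw StopIteration;
      return arr[i++];
    }
  };
}

// Result-object style: next() always returns {value, done}.
function resultIter(arr) {
  var i = 0;
  return {
    next: function() {
      if (i >= arr.length) return {value: undefined, done: true};
      return {value: arr[i++], done: false};
    }
  };
}

// Consuming each style:
function sumException(iter) {
  var total = 0;
  try {
    while (true) total += iter.next();   // termination is via exception
  } catch (e) {
    if (e !== StopIteration) throw e;    // re-throw anything unexpected
  }
  return total;
}

function sumResult(iter) {
  var total = 0;
  for (var r = iter.next(); !r.done; r = iter.next()) total += r.value;
  return total;
}
```

The exception style pays try-catch overhead on every loop; the result-object style pays an allocation per step unless the engine (or the iterator) avoids it.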
Original comment by usrbi...@yahoo.com
on 13 Mar 2013 at 9:30
There is still some churn; I'll post updates here.
Original comment by arv@chromium.org
on 13 Mar 2013 at 10:42
I benchmarked a few different iteration protocols, in at least three
different ways each. The results probably aren't very significant for
predicting how fast a native implementation will be, but it's some
data at least.
Testing the use of github gists.
https://gist.github.com/usrbincc/5164116
Sample test run:
#--cut--
classic StopIteration: loop
224 (average of 224, 221, 222, 236, 219)
classic StopIteration: loopFor
152 (average of 153, 152, 150, 150, 156)
classic StopIteration: loopInline
233 (average of 224, 259, 222, 230, 231)
classic StopIteration: loopInlineFunc
53 (average of 56, 53, 53, 53, 52)
{value: val, done: bool} new object: loop
363 (average of 379, 369, 347, 354, 366)
{value: val, done: bool} new object: loopFor
353 (average of 363, 352, 369, 339, 346)
{value: val, done: bool} new object: loopInline
307 (average of 211, 328, 308, 340, 348)
{value: val, done: bool} reuse object: loop
181 (average of 186, 183, 180, 179, 181)
{value: val, done: bool} reuse object: loopFor
182 (average of 180, 195, 179, 183, 177)
{value: val, done: bool} reuse object: loopInline
89 (average of 88, 86, 87, 88, 96)
sentinel StopIteration: loop
197 (average of 194, 191, 215, 194, 194)
sentinel StopIteration: loopFor
150 (average of 150, 154, 149, 151, 149)
sentinel StopIteration: loopInline
101 (average of 102, 102, 101, 100, 101)
moveNext, current: loop
157 (average of 162, 161, 153, 156, 156)
moveNext, current: loopFor
161 (average of 155, 149, 160, 187, 154)
moveNext, current: loopInline
51 (average of 51, 53, 52, 52, 50)
pure JS: loop
23 (average of 24, 23, 22, 23, 25)
#--cut--
Preliminary conclusion: try-catch is slow, but not as slow as
allocating thousands of new objects. I can't speak to the validity of
the hand-inlined tests, but it's surprising how well V8 optimizes once
you move code into a separate function.
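For reference, here is a sketch (mine, not taken from the gist) of the allocation difference between the "new object" and "reuse object" variants measured above.

```javascript
// 'new object' variant: allocates a fresh result object on every call.
function freshIter(arr) {
  var i = 0;
  return {
    next: function() {
      return i < arr.length
        ? {value: arr[i++], done: false}
        : {value: undefined, done: true};
    }
  };
}

// 'reuse object' variant: mutates one preallocated result, avoiding
// per-step GC pressure. Callers must copy any fields they want to keep
// across calls, since the object is overwritten by the next next().
function reusingIter(arr) {
  var i = 0;
  var result = {value: undefined, done: false};
  return {
    next: function() {
      if (i < arr.length) {
        result.value = arr[i++];
        result.done = false;
      } else {
        result.value = undefined;
        result.done = true;
      }
      return result;
    }
  };
}
```

The reuse trick trades aliasing hazards for the roughly 2x speedup the benchmark shows between the two variants.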
Original comment by usrbi...@yahoo.com
on 14 Mar 2013 at 7:17
Original issue reported on code.google.com by
arv@chromium.org
on 13 Mar 2013 at 6:11