tc39 / proposal-binary-ast

Binary AST proposal for ECMAScript

Clarify experiment results #8

Open domenic opened 7 years ago

domenic commented 7 years ago

The time required to create a full AST (without verifying annotations) was reduced by ~70-90%, which is a considerable reduction since parsing time in SpiderMonkey for the plain JavaScript was 500-800 ms for the benchmark.

Is "the time required to create a full AST" 500-800 ms? Or is it a subset of that? Maybe stating the actual reduction in milliseconds would be helpful.

kannanvijayan-zz commented 7 years ago

@domenic

Yeah, the phrasing there is poor. In the meantime, looking at the Bugzilla bug for the prototype, the 500-800 ms is for AST construction, not the full parse. I'll fix the wording for now, and we'll work on putting a proper table of numbers together.

See: https://bugzilla.mozilla.org/show_bug.cgi?id=1349917#c30

With this patch, we get the following speedups on the Facebook source code (2.5 MB gzipped), measuring the AST creation phase (which takes ~500-800 ms from source).

Yoric commented 7 years ago

I'll try to be as precise as possible.

On the SpiderMonkey side, the 500-800 ms covered:

Everything was done from/to memory, so file I/O was not included. This was benchmarked on a minified Facebook chat.

The prototype side implemented:

Additional information:

ojhunt commented 7 years ago

Do you have any performance numbers for other implementations? Even just JS parsed by other engines vs. the Binary AST in your implementation?

ojhunt commented 7 years ago

Oh, I see "The time required to create a full AST (without verifying annotations) was reduced by ~70-90%, which is a considerable reduction since SpiderMonkey's AST construction time for the plain JavaScript was 500-800 ms for the benchmark."

I don't think this is a useful performance comparison, because when writing the JSC parser I found that semantic analysis and error checking were the bulk of parse time. In general, parse time (actual lexical analysis and parsing) is linear in code size; even if there are theoretically super-linear paths, they aren't actually hit in normal code.
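To make that linearity point concrete, here is a rough micro-benchmark sketch (mine, not from the thread): it generates progressively larger sources and times `new Function(...)`, which forces at least a top-level parse. Engines defer parsing of inner functions to different degrees, so the absolute numbers are only indicative; it runs in a browser console or recent Node.

```js
// Rough sketch (not from the thread): time parsing of progressively larger
// sources to see whether parse time grows roughly linearly with code size.
// new Function(src) forces the engine to parse src; nothing is ever called.
function makeSource(functionCount) {
  const parts = [];
  for (let i = 0; i < functionCount; i++) {
    parts.push(`function f${i}(x) { return x + ${i}; }`);
  }
  return parts.join("\n");
}

for (const n of [1000, 10000, 100000]) {
  const src = makeSource(n);
  const start = performance.now();
  new Function(src);
  const elapsed = performance.now() - start;
  console.log(`${n} functions (${src.length} bytes): ${elapsed.toFixed(1)} ms`);
}
```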

Also, I'm unsure which numbers I should be looking at, as the numbers in "news bench" don't clearly separate out what is being measured. It's also unclear (given the size of the code and the content involved) whether it is doing different things in different browsers, but that's a general problem with benchmarks built on actual content.

Yoric commented 7 years ago

More numbers are coming, once we have made sufficient progress on the advanced prototype.