Closed by dtolnay 5 years ago.
One thing I should note is that I was surprised that `#[derive(Serialize)]` called this function at all, but then I remembered this is part of `quote!`, which also suggests it may be relatively hot if `quote!` calls it a lot. In that sense I think we could perhaps speed this up by a mild amount, but as the linked post shows it's only 1-2ms even for a massive macro.
Does that include the time spent running `watt::sym::token_stream_serialize` to get the parsed data into wasm? Running parsing in wasm should eliminate a bunch of calls to these other methods, which might increase the impact a bit.
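To make that serialization cost concrete, here is a rough sketch of the kind of flattening a `token_stream_serialize` step has to do before the bytes can be copied into wasm linear memory. This is illustrative only: the encoding, tags, and helper names below are assumptions, not watt's actual wire format.

```rust
// Illustrative sketch only: NOT watt's real wire format. It just shows the
// shape of the work involved in flattening a token stream into bytes that
// can be copied across the host/wasm boundary (spans are ignored here).
use proc_macro2::{Delimiter, TokenStream, TokenTree};

fn flatten(ts: &TokenStream, out: &mut Vec<u8>) {
    for tt in ts.clone() {
        match tt {
            TokenTree::Ident(ident) => {
                out.push(0); // tag: ident
                push_str(&ident.to_string(), out);
            }
            TokenTree::Punct(punct) => {
                out.push(1); // tag: punct
                out.push(punct.as_char() as u8);
            }
            TokenTree::Literal(lit) => {
                out.push(2); // tag: literal
                push_str(&lit.to_string(), out);
            }
            TokenTree::Group(group) => {
                out.push(3); // tag: group start, followed by its delimiter
                out.push(match group.delimiter() {
                    Delimiter::Parenthesis => b'(',
                    Delimiter::Brace => b'{',
                    Delimiter::Bracket => b'[',
                    Delimiter::None => b' ',
                });
                flatten(&group.stream(), out);
                out.push(4); // tag: group end
            }
        }
    }
}

fn push_str(s: &str, out: &mut Vec<u8>) {
    out.extend_from_slice(&(s.len() as u32).to_le_bytes());
    out.extend_from_slice(s.as_bytes());
}

fn main() {
    let ts: TokenStream = "fn add(a: u32, b: u32) -> u32 { a + b }".parse().unwrap();
    let mut buf = Vec::new();
    flatten(&ts, &mut buf);
    println!("{} bytes crossing the boundary", buf.len());
}
```

The guest then has to undo an encoding like this before it can look at a single token, which is the deserialization time mentioned further down.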
In addition, given the higher minimum rustc version for `watt`, it may be reasonable to use `rustc_lexer` as the lexer within wasm, probably based on the work in https://github.com/alexcrichton/proc-macro2/pull/202. I'm guessing it's been optimized a bit more than `strnom`.
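For reference, a minimal sketch of what driving `rustc_lexer` looks like, assuming the published crate's `tokenize` API (an iterator of flat tokens carrying a `kind` and a byte length). Turning these flat tokens into nested token trees with spans and joint punctuation is the part the wasm-side code would still have to supply.

```rust
// Sketch under the assumption that rustc_lexer exposes `tokenize(&str)`
// yielding flat tokens with a `kind` and a byte length. It knows nothing
// about nested groups or spacing, so a token-tree builder still sits on top.
fn lex(src: &str) {
    let mut pos = 0usize;
    for token in rustc_lexer::tokenize(src) {
        let len = token.len as usize; // `len` has been usize or u32 depending on the version
        let text = &src[pos..pos + len];
        pos += len;
        println!("{:?}: {:?}", token.kind, text);
    }
}

fn main() {
    lex("fn answer() -> u32 { 42 }");
}
```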
Oh right, that's an excellent point! Yes, the timing information for `watt::sym::token_stream_parse` doesn't include the time spent in `watt::sym::token_stream_serialize`, nor the time to deserialize inside of wasm itself. Those would all largely be avoided if we moved this into wasm. While the gain is still likely to be modest, I think it'd be a clear win speed-wise.
@dtolnay benchmarked an implementation of this in https://github.com/dtolnay/watt/pull/26#pullrequestreview-310856210 and found it had equivalent (or worse) performance, even on large inputs. I'm inclined to think we should leave `token_stream_parse` implemented by the host runtime.
As noted in https://github.com/dtolnay/watt/issues/2#issuecomment-548018670: "We'll need to import something like https://github.com/alexcrichton/proc-macro2/blob/1.0.6/src/strnom.rs and use that to implement token stream parsing rather than calling out to a host func."
It's possible we can simplify the implementation from proc-macro2 by omitting the parts that deal with parsing comments, as those won't appear in our use case.
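As a minimal sketch of that simplification, assuming the text reaching the wasm-side parser is a re-rendered token stream and therefore contains no `//` or `/* */` comments (doc comments reach proc macros as `#[doc = "..."]` attributes), the whitespace-skipping step can drop the comment-scanning that proc-macro2's `strnom` has to do:

```rust
// Minimal sketch, assuming comment-free input: the source was already parsed
// once by rustc, and doc comments arrive as #[doc = "..."] attributes.
// proc-macro2's strnom additionally scans for "//" and "/*" at this point;
// with comment-free input, plain whitespace trimming is enough.
fn skip_whitespace(input: &str) -> &str {
    input.trim_start()
}

fn main() {
    let rest = skip_whitespace("  \n\t# [doc = \"hello\"] fn f() {}");
    assert_eq!(rest, "# [doc = \"hello\"] fn f() {}");
    println!("ok: {}", rest);
}
```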