axelkar opened 1 month ago
You’re completely right, I missed that!
As you might be able to tell I’ve stopped working on the tutorial series, so I think I would rather keep everything as-is as a kind of historical archive, rather than a maintained project. If I were to fix things like this, I’d have to write up new posts explaining the issue and the fix.
I hope you understand :)
Yeah it's all good! Thank you very much for the series! It really helped me understand rust-analyzer's parsing architecture.
Think about the following file in an imaginary language, where the grammar only matches an identifier and nothing else:
(No trailing newline, token list after lexing: [Whitespace, Ident])
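Since the file contents were elided above, here is a minimal sketch of what the lexing step could look like. The input `"  foo"` and the identifier name are made up for illustration; the series' real lexer works differently, but it produces the same `[Whitespace, Ident]` token list for an input like this:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum SyntaxKind {
    Whitespace,
    Ident,
}

// Hand-rolled stand-in lexer: splits the input into runs of whitespace
// and runs of non-whitespace (treated as identifiers).
fn lex(input: &str) -> Vec<(SyntaxKind, &str)> {
    let mut tokens = Vec::new();
    let mut rest = input;
    while !rest.is_empty() {
        let is_ws = rest.starts_with(|c: char| c.is_whitespace());
        let end = if is_ws {
            rest.find(|c: char| !c.is_whitespace()).unwrap_or(rest.len())
        } else {
            rest.find(|c: char| c.is_whitespace()).unwrap_or(rest.len())
        };
        let kind = if is_ws { SyntaxKind::Whitespace } else { SyntaxKind::Ident };
        tokens.push((kind, &rest[..end]));
        rest = &rest[end..];
    }
    tokens
}

fn main() {
    // "  foo": leading whitespace, then an identifier, no trailing newline.
    let tokens = lex("  foo");
    assert_eq!(tokens[0].0, SyntaxKind::Whitespace);
    assert_eq!(tokens[1].0, SyntaxKind::Ident);
    println!("{:?}", tokens);
}
```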
The grammar module uses `Source::next_token()`, which skips trivia first and then returns the non-trivia token, `SyntaxKind::Ident`. Before the sink step, the event list looks like this: `[Event::AddToken]`. When the sink receives the `AddToken` event, it pops off the token at index 0 from the token list, which happens to be `SyntaxKind::Whitespace`. After that there is no trivia to eat (`Sink::eat_trivia`) and there are no more events. In the end, the CST doesn't even contain the `Ident` token.

I think that moving `eat_trivia` to before the match statement works. Even better would be placing it after `StartNode` but before `AddToken`, so comments are added into the node.
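To make the suggested fix concrete, here is a hypothetical, stripped-down model of the sink. It omits node events and the rowan tree builder that the real sink uses, keeping only the token cursor; the names `Sink`, `Event`, and `eat_trivia` follow the series' terminology, and the sample tokens are made up:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum SyntaxKind {
    Whitespace,
    Ident,
}

enum Event {
    AddToken,
}

struct Sink<'t> {
    tokens: &'t [(SyntaxKind, &'t str)],
    cursor: usize,
    output: Vec<(SyntaxKind, String)>,
}

impl<'t> Sink<'t> {
    fn new(tokens: &'t [(SyntaxKind, &'t str)]) -> Self {
        Self { tokens, cursor: 0, output: Vec::new() }
    }

    // Copy trivia tokens into the output until the next non-trivia token.
    fn eat_trivia(&mut self) {
        while let Some(&(kind, text)) = self.tokens.get(self.cursor) {
            if kind != SyntaxKind::Whitespace {
                break;
            }
            self.output.push((kind, text.to_string()));
            self.cursor += 1;
        }
    }

    // Emit the token under the cursor and advance.
    fn token(&mut self) {
        let (kind, text) = self.tokens[self.cursor];
        self.output.push((kind, text.to_string()));
        self.cursor += 1;
    }

    fn process(mut self, events: &[Event]) -> Vec<(SyntaxKind, String)> {
        for event in events {
            // The fix: skip past trivia *before* handling the event, so
            // AddToken consumes the Ident rather than the Whitespace.
            self.eat_trivia();
            match event {
                Event::AddToken => self.token(),
            }
        }
        self.output
    }
}

fn main() {
    let tokens = [(SyntaxKind::Whitespace, "  "), (SyntaxKind::Ident, "foo")];
    let events = [Event::AddToken];
    let cst = Sink::new(&tokens).process(&events);
    // The Ident token now makes it into the output, preceded by its trivia.
    assert_eq!(cst.last().unwrap().0, SyntaxKind::Ident);
    println!("{:?}", cst);
}
```

Without the `eat_trivia()` call before the match, `token()` would emit the `Whitespace` entry for the `AddToken` event and the `Ident` would never be consumed, which is exactly the failure described above.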