Closed gavvvr closed 6 years ago
Interesting. I don't think this is what we intended save/reset to be used for; it's supposed to help chain together multiple "chunks", while preserving line/column information.
I can certainly see why it's confusing that it doesn't work that way!
You might also like to save lexer.index, and then when resetting, do something like lexer.reset(sourceStr.slice(savedIndex), savedInfo).
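A minimal sketch of that save-index-and-slice pattern. Since moo itself isn't loaded in this snippet, TinyLexer below is a made-up stand-in with the same reset(chunk, info) / save() / index shape as a moo lexer; in real code you would call these methods on the lexer returned by moo.compile(...):

```javascript
// Stand-in for a moo-style lexer (hypothetical, for illustration only):
// reset(chunk, info) starts lexing `chunk` with state restored from `info`,
// save() returns that state, and `index` is the position in the current chunk.
class TinyLexer {
  constructor() { this.buffer = ''; this.index = 0; this.line = 1; this.col = 1; }
  reset(chunk, info) {
    this.buffer = chunk;
    this.index = 0;
    this.line = info ? info.line : 1;
    this.col = info ? info.col : 1;
    return this;
  }
  save() { return { line: this.line, col: this.col }; }
  next() {
    const m = /^\s*(\S+)/.exec(this.buffer.slice(this.index));
    if (!m) return undefined;
    this.index += m[0].length;
    this.col += m[0].length;
    return { value: m[1], line: this.line, col: this.col - m[1].length };
  }
}

const sourceStr = 'foo bar baz';
const lexer = new TinyLexer().reset(sourceStr);

lexer.next();                      // consume 'foo'
const savedInfo = lexer.save();    // line/column state at this point
const savedIndex = lexer.index;    // position reached in the source string

lexer.next();                      // consume 'bar'

// "Rewind": feed the unconsumed remainder back in, with the saved state,
// so line/column numbers stay correct.
lexer.reset(sourceStr.slice(savedIndex), savedInfo);
console.log(lexer.next().value);   // 'bar' again
```

The key point is that reset() always starts lexing its chunk argument from position zero, so to resume mid-string you must slice the buffer yourself and pass the saved info alongside it.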
Also: as an efficiency thing, note that you are re-doing work here; Moo will have to parse the tokens all over again! You might instead like to write a small wrapper on top of Moo that keeps a history of tokens, and allows you to "rewind" to an earlier point in the token stream.
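One way such a wrapper could look. The TokenHistory name, its mark()/rewind() methods, and the trivial word tokenizer below are all made up for illustration; the real thing would wrap a moo lexer's next():

```javascript
// Hypothetical wrapper that records every token so a parser can backtrack
// without re-lexing the input.
class TokenHistory {
  constructor(lexer) {
    this.lexer = lexer;   // anything with a next() returning tokens/undefined
    this.history = [];    // every token seen so far
    this.pos = 0;         // current position in `history`
  }
  next() {
    if (this.pos < this.history.length) return this.history[this.pos++];
    const tok = this.lexer.next();
    if (tok !== undefined) { this.history.push(tok); this.pos++; }
    return tok;
  }
  mark() { return this.pos; }       // remember a point in the token stream
  rewind(mark) { this.pos = mark; } // jump back; tokens replay from history
}

// Trivial stand-in tokenizer (a real moo lexer exposes the same next() shape).
function* words(str) { for (const w of str.split(/\s+/)) yield { value: w }; }
const gen = words('foo bar baz');
const stream = new TokenHistory({ next: () => gen.next().value });

console.log(stream.next().value); // 'foo'
const m = stream.mark();
console.log(stream.next().value); // 'bar'
stream.rewind(m);
console.log(stream.next().value); // 'bar' again, replayed from history
```

Because already-seen tokens are replayed from the history array, rewinding is O(1) and the underlying lexer never re-scans the source.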
Well, I checked that I can do 'backtracking' while preserving the index. Thank you!
Hi. I use Moo as a tokenizer and am trying to write a parser myself. Sometimes I would like to go back to a previous token. From the description, it looks like save()/reset() should fit my requirements. But unfortunately, calling reset() makes the lexer start from the beginning (as if I had called it with a single string argument), while I expect it to start from the saved state. I prepared a quick example here: https://runkit.com/embed/ux5mwd86wks6 . Please let me know if I am missing something and using it in a wrong way.