no-context / moo

Optimised tokenizer/lexer generator! 🐄 Uses /y for performance. Moo.
BSD 3-Clause "New" or "Revised" License

Question: how do I use save/reset for 'backtracking'? #89

Closed · gavvvr closed this issue 6 years ago

gavvvr commented 6 years ago

Hi. I use Moo as a tokenizer and am trying to write a parser myself. Sometimes I would like to go back to a previous token, and from the description it looks like save()/reset() should fit this requirement. Unfortunately, calling reset() makes the lexer start from the beginning (as if I had called it with a single string argument), while I expect it to resume from the saved state. I prepared a quick example here: https://runkit.com/embed/ux5mwd86wks6 . Please let me know if I'm missing something and using it the wrong way.

tjvr commented 6 years ago

Interesting. I don't think this is what we intended save/reset to be used for; it's supposed to help chain together multiple "chunks", while preserving line/column information.
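A minimal sketch of that intended use, feeding the input in chunks while keeping line numbers consistent (the token rules and input strings here are made up for illustration; `save()` and `reset(chunk, info)` are the Moo calls being described):

```js
const moo = require('moo')

const lexer = moo.compile({
  nl:   { match: /\n/, lineBreaks: true },
  ws:   /[ \t]+/,
  word: /[a-z]+/,
})

// Chunk 1: lex it, then remember where we got to.
lexer.reset('foo bar\n')
for (const tok of lexer) { /* ...handle chunk 1 tokens... */ }
const info = lexer.save()          // line/col state after chunk 1

// Chunk 2: continue with the saved info, so line numbers carry on.
lexer.reset('baz qux\n', info)
console.log(lexer.next().line)     // -> 2
```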

I can certainly see why it's confusing that it doesn't work that way!

You might like to also save lexer.index, and then when resetting, do something like lexer.reset(sourceStr.slice(savedIndex), savedInfo).
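Roughly like this (a sketch only; `sourceStr`, `savedIndex`, and `savedInfo` follow the naming in the suggestion above, and the token rules are made up):

```js
const moo = require('moo')

const lexer = moo.compile({
  ws:     /[ \t]+/,
  number: /[0-9]+/,
  word:   /[a-z]+/,
})

const sourceStr = 'foo 42 bar'
lexer.reset(sourceStr)

lexer.next()                        // consume 'foo'
const savedInfo = lexer.save()      // line/col state at this point
const savedIndex = lexer.index      // offset into sourceStr

lexer.next()                        // consume ' '
lexer.next()                        // consume '42'

// "Backtrack": re-feed the remaining input with the saved state.
lexer.reset(sourceStr.slice(savedIndex), savedInfo)
lexer.next()                        // ' ' again
```

One caveat: after the reset, token offset values are measured from the start of the sliced chunk, not the original string.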

Also: as an efficiency thing, note that you are re-doing work here; Moo will have to parse the tokens all over again! You might instead like to write a small wrapper on top of Moo, that keeps a history of tokens, and allows you to "rewind" to an earlier point in the token stream.
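Something along these lines (a sketch only; `RewindableLexer` and its method names are invented for illustration and are not part of Moo):

```js
const moo = require('moo')

// Hypothetical wrapper: buffers tokens so the parser can rewind
// without re-tokenizing the input.
class RewindableLexer {
  constructor(lexer) {
    this.lexer = lexer
    this.history = []   // tokens seen so far
    this.pos = 0        // current position in the history
  }

  next() {
    if (this.pos < this.history.length) {
      return this.history[this.pos++]   // replay a buffered token
    }
    const tok = this.lexer.next()
    if (tok !== undefined) {
      this.history.push(tok)
      this.pos++
    }
    return tok
  }

  mark() {             // remember the current position
    return this.pos
  }

  rewind(mark) {       // go back to a previously marked position
    this.pos = mark
  }
}

// Usage sketch:
const lexer = moo.compile({ ws: /[ \t]+/, word: /[a-z]+/ })
lexer.reset('foo bar')
const rl = new RewindableLexer(lexer)

const mark = rl.mark()
rl.next()              // word 'foo'
rl.next()              // ws ' '
rl.rewind(mark)
rl.next()              // word 'foo' again, without re-lexing
```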

gavvvr commented 6 years ago

Well, I checked that I can do 'backtracking' by preserving the index. Thank you!