I am not suggesting merging this as-is. It was helpful enough for me while debugging a large parser on an input over 200k tokens long (a multi-MB input file, with a lexer that can only do so much without parsing context), and I wanted to give back to the community. If it is useful to other people, maybe it can serve as a basis or inspiration for a stable version.
The general design principle behind the rollback tracing is to answer the question "Why couldn't the next token in the input stream be parsed?" by keeping track of every high-level parser that attempted to parse that next token (or the sequence of tokens starting with it) and recording why each attempt failed.
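To make the idea concrete, here is a minimal sketch of that principle in Python. All names (`RollbackTrace`, `Attempt`, `record`, `report`) are hypothetical and not from the actual implementation; the point is only the core mechanism: keep the attempts made at the furthest failure position, since those are the ones that explain why the next unparsable token was rejected.

```python
from dataclasses import dataclass, field

@dataclass
class Attempt:
    parser: str   # name of the high-level parser that tried (hypothetical field)
    pos: int      # token index where the attempt gave up
    reason: str   # why the attempt failed

@dataclass
class RollbackTrace:
    furthest: int = -1
    attempts: list = field(default_factory=list)

    def record(self, parser: str, pos: int, reason: str) -> None:
        # Only attempts at the furthest failure point matter: they explain
        # why the *next* token in the stream could not be parsed.
        if pos > self.furthest:
            self.furthest = pos
            self.attempts = []          # rollback past earlier failures
        if pos == self.furthest:
            self.attempts.append(Attempt(parser, pos, reason))

    def report(self, tokens) -> str:
        tok = tokens[self.furthest] if 0 <= self.furthest < len(tokens) else "<eof>"
        lines = [f"could not parse token {tok!r} at index {self.furthest}:"]
        lines += [f"  - {a.parser}: {a.reason}" for a in self.attempts]
        return "\n".join(lines)
```

A parser would call `record` on every rollback; at the end, `report` lists each high-level parser that reached the stuck token and the reason it backed off, e.g.:

```python
trace = RollbackTrace()
trace.record("expr", 3, "expected identifier")
trace.record("stmt", 5, "expected ';'")
trace.record("decl", 5, "expected type name")
print(trace.report(["let", "x", "=", "1", "+", "@"]))
```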