lark-parser / lark

Lark is a parsing toolkit for Python, built with a focus on ergonomics, performance and modularity.

Incorrect start_pos / end_pos in the tree #1406

munching commented 7 months ago

Hi, I'd like to report a possible bug with a token's attributes start_pos, end_pos, and line. Below is an example that reproduces it.

Grammar:

start: statement_list
statement_list: statement+
statement: variable "=" variable
variable: ID
ID: LETTER (LETTER | DIGIT)*
LETTER: /[a-zA-Z]/
DIGIT: /[0-9]/
%import common.WS
%ignore WS

Input: myvar1 = myvar2

Python code:

from lark import Lark

with open('grammar.lark') as f:
    grammar = f.read()

with open('input.txt') as f:
    input = f.read()

parser = Lark(grammar, start='start', parser='lalr')
tree = parser.parse(input)

for node in tree.iter_subtrees_topdown():
    print(node.data + " start: " + str(node.data.start_pos) + " end: " + str(node.data.end_pos) + " line: " + str(node.data.line))

I get the following output, which is obviously incorrect:

start start: 0 end: 5 line: 1
statement_list start: 22 end: 36 line: 2
statement start: 49 end: 58 line: 3
variable start: 82 end: 90 line: 4
variable start: 82 end: 90 line: 4

The input has only one line and 15 characters total. Does this look like a bug, or am I misusing the attributes? Thank you.

MegaIng commented 7 months ago

node.data isn't part of the input, but of the grammar: it's the name of the rule that matched. Use node.meta instead.
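
For example, here is a minimal sketch of the corrected loop. Note that rule nodes only get position info on .meta when the parser is created with propagate_positions=True:

from lark import Lark

with open('grammar.lark') as f:
    grammar = f.read()

with open('input.txt') as f:
    text = f.read()

# propagate_positions=True copies the matched tokens' positions onto each rule node's .meta
parser = Lark(grammar, start='start', parser='lalr', propagate_positions=True)
tree = parser.parse(text)

for node in tree.iter_subtrees_topdown():
    m = node.meta  # positions of the input span this rule matched
    print(f"{node.data} start: {m.start_pos} end: {m.end_pos} line: {m.line}")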

@erezsh We really need to change it back so that Tree doesn't take Tokens, but just their values. Those being Tokens results in many issues.

munching commented 7 months ago

Hi @MegaIng, thank you for the quick response. I totally misunderstood the "data" thing; now it works fine. Thank you very much for helping!

erezsh commented 7 months ago

@MegaIng Yeah, makes sense. The tokens of the parsed grammar aren't relevant to the output tree.

munching commented 7 months ago

Not sure what you mean by that; I'm relatively new to Lark. But trying to access start_pos / end_pos of tokens is my attempt to bring into the tree the comments that were ignored at the parsing stage. I've done some research and it looks like there isn't a way to do that easily. I'm parsing a Pascal-like language and must use the LALR parser because of its speed: my usual input is roughly 350 MB of source code, and Earley is unacceptably slow on it. I tried rewriting my grammar to not ignore the comments, but that's probably not possible with LALR. So in my case, tokens carrying information about where they were found in the input are the last hope for bringing in the comments.

MegaIng commented 7 months ago

@munching We aren't talking about your issue directly. If lark behaved correctly, you would have gotten an AttributeError from accessing those names on .data, which probably would have clued you in faster. That is what we are talking about fixing.

For collecting comments we normally suggest terminal callbacks, but that doesn't easily bring them into the tree; that takes a bit of extra work.

erezsh commented 7 months ago

@munching You might find this useful: https://lark-parser.readthedocs.io/en/latest/recipes.html#collect-all-comments-with-lexer-callbacks
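
The recipe boils down to something like this (a sketch following that page's pattern; the toy grammar, COMMENT terminal, and sample input are just illustrations):

from lark import Lark

comments = []

parser = Lark("""
    start: INT*
    COMMENT: /#.*/
    %ignore COMMENT
    %import common (INT, WS)
    %ignore WS
""", parser="lalr", lexer_callbacks={'COMMENT': comments.append})

parser.parse("1 2 3  # hello\n4 5 6  # world")

print(comments)  # the collected COMMENT tokens, each carrying .line, .start_pos, .end_pos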

munching commented 7 months ago

@erezsh @MegaIng Thank you very much for the explanations! I'm actually already using terminal callbacks to collect comments; then, knowing their start/end, I go through the tree and try to find the "tightest" token that fully encloses my comment. Then the task is to figure out between which children to put it. And that's where I was stuck, because data.start_pos was giving me seemingly irrelevant numbers :)

erezsh commented 7 months ago

There's some code I wrote once that did something like this.

Maybe I should clean it up and add it to Lark as a utility function.

I don't know if it will be helpful for you, but here is the code:

def assign_comments(tree, comments):
    # classify() groups an iterable into a dict of lists (see the note below).
    # Group all subtrees by the line they start on; nodes without position
    # info (an empty meta) fall under the key None and are dropped.
    nodes_by_line = classify(tree.iter_subtrees(), lambda t: getattr(t.meta, 'line', None))
    nodes_by_line.pop(None, None)
    # For each line, find the node that starts furthest right, and the node
    # that starts furthest left (ties broken by preferring the widest span).
    rightmost_nodes = {line: max(nodes, key=lambda n: n.meta.column) for line, nodes in nodes_by_line.items()}
    leftmost_nodes  = {line: min(nodes, key=lambda n: (n.meta.column, -(n.meta.end_pos - n.meta.start_pos))) for line, nodes in nodes_by_line.items()}

    for c in comments:
        if c.line == c.end_line:
            # Single-line comment: attach it to the rightmost node on that line.
            n = rightmost_nodes[c.end_line]
            assert not hasattr(n.meta, 'inline_comment')
            n.meta.inline_comment = c
        else:
            if c.end_line not in leftmost_nodes:
                # Probably past the end of the file
                # XXX verify this is the case
                continue

            # Multi-line comment: attach it as a header comment to the
            # leftmost node on the comment's last line.
            n = leftmost_nodes[c.end_line]
            header_comments = getattr(n.meta, 'header_comments', [])
            n.meta.header_comments = header_comments + [c]

P.S. classify() is basically like itertools.groupby but it returns a dict.
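
In case it's useful, a minimal stand-in for classify(), assuming it only needs the key argument used above:

def classify(seq, key=None):
    # Group items into a dict of lists, keyed by key(item).
    d = {}
    for item in seq:
        k = key(item) if key is not None else item
        d.setdefault(k, []).append(item)
    return d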