The generator object created by .tokenize() is being exhausted.
Python 3.8.1 (default, Feb 13 2020, 10:50:34)
[Clang 11.0.0 (clang-1100.0.33.8)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from scim2_filter_parser import lexer
>>> from scim2_filter_parser import parser
>>> filter = 'userName eq "bjensen"'
>>> token_stream = lexer.SCIMLexer().tokenize(filter)
>>> for token in token_stream:
...     print(token)
...
Token(type='ATTRNAME', value='userName', lineno=1, index=0)
Token(type='EQ', value='eq', lineno=1, index=9)
Token(type='COMP_VALUE', value='bjensen', lineno=1, index=12)
>>> next(token_stream, None) is None
True
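This is standard Python generator behavior rather than anything specific to scim2-filter-parser: once a generator has been iterated to the end, it yields nothing more. A minimal sketch with a throwaway generator (gen() here is just an illustration, not part of the library):

>>> def gen():
...     yield 1
...     yield 2
...
>>> g = gen()
>>> list(g)  # first pass consumes every item
[1, 2]
>>> list(g)  # the generator is now exhausted
[]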
You'll need to re-instantiate the token_stream object to avoid this error.
>>> from scim2_filter_parser import ast
>>> token_stream = lexer.SCIMLexer().tokenize(filter)
>>> ast_nodes = parser.SCIMParser().parse(token_stream)
>>> for depth, node in ast.flatten(ast_nodes):
...     print(' ' * depth, node)
...
 Filter(expr=AttrExpr, negated=False, namespace=None)
     AttrExpr(value='eq', attr_path=AttrPath, comp_value=CompValue)
         AttrPath(attr_name='userName', sub_attr=None, uri=None)
         CompValue(value='bjensen')
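Alternatively, if you want to go over the token stream more than once (say, to print the tokens and then parse them), you can buffer it in a list first. A sketch under the same setup as above; the list is re-wrapped with iter() when parsing, which keeps it compatible with parsers that call next() on the token stream directly:

>>> tokens = list(lexer.SCIMLexer().tokenize(filter))  # buffer the tokens
>>> for token in tokens:
...     print(token)  # first pass: inspect the tokens
...
Token(type='ATTRNAME', value='userName', lineno=1, index=0)
Token(type='EQ', value='eq', lineno=1, index=9)
Token(type='COMP_VALUE', value='bjensen', lineno=1, index=12)
>>> ast_nodes = parser.SCIMParser().parse(iter(tokens))  # second pass: parse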
Hope that helps. Feel free to reopen if it does not.
Hi,
The following is the code I'm using for tokenizing and parsing. The tokenizing step works, but the line
ast_nodes = parser.SCIMParser().parse(token_stream)
gives me an exception. What am I missing here?