I am not an expert in lexing/parsing, but I think this can be improved. In the example below, there are two parser rules, and they both essentially do the same thing: `word1` matches a sequence of `word2`s, and `word2` matches a sequence of characters. This makes it impossible to tell where the words should be broken up; any sequence of characters can be divided into many different sequences of words. The example fails to compile (message below) with an unhelpful error that could probably be improved.
```nim
import patty
import nimly
import unittest

variantp FluentToken:
  Character(character: char)

niml fluentLexer[FluentToken]:
  r"[A..Za..z1..9\-]":
    Character(token.token[0])

nimy fluentParser[FluentToken]:
  top[seq[string]]:
    word1:
      return $1
  word1[seq[string]]:
    word2{}:
      return $1
  word2[string]:
    Character{}:
      var str = ""
      for character in $1:
        str &= character.character
      return str

test "test":
  var testLexer = fluentLexer.newWithString("testing")
  var parser = fluentParser.newParser()
  discard parser.parse(testLexer)
```
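To make the ambiguity concrete, here is a short Python sketch (not part of the report, just an illustration) that enumerates every way a string can be split into non-empty words. For a 7-character input like `"testing"` there are 2^6 = 64 such partitions, and each one is a valid parse under a grammar where `word1` is any sequence of `word2`s and `word2` is any sequence of characters:

```python
from itertools import combinations

def splits(s: str):
    """Enumerate every way to split s into non-empty contiguous words."""
    n = len(s)
    for k in range(n):  # choose k cut points among the n-1 gaps
        for cuts in combinations(range(1, n), k):
            bounds = [0, *cuts, n]
            yield [s[a:b] for a, b in zip(bounds, bounds[1:])]

parses = list(splits("testing"))
print(len(parses))  # 2**(len("testing") - 1) = 64 distinct parses
```

A grammar generator has no way to pick one of these 64 parses, so rejecting the grammar is reasonable; the error message is what could be friendlier.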
Outputs: