Open Codeguyross opened 5 years ago
I ran into the same issue. Here are the tokens just for the first line:
LexToken(IDENT,'temp',1,0)
LexToken(LPAREN,'(',1,4)
LexToken(IDENT,'temp',1,5)
LexToken(LT,'<',1,9)
LexToken(EQ,'=',1,10)
LexToken(NUMBER,0,1,11)
LexToken(RPAREN,')',1,12)
LexToken(EQ,'=',1,14)
LexToken(LBRACKET,'[',1,16)
LexToken(RBRACKET,']',1,17)
LexToken(SEMI,';\n',1,18)
The tokens LT and EQ are detected instead of the single token LE. I don't know how to fix that in PLY and asked the question there: https://github.com/dabeaz/ply/issues/207
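The split can be reproduced outside PLY with a plain `re` alternation, since PLY ultimately combines all token rules into one master regular expression. The group names below are illustrative, not taken from the actual SMOP lexer:

```python
import re

# If the single-character rule is tried before the two-character one,
# '<=' is consumed as '<' followed by '=' -- the LT/EQ split seen above.
wrong_order = re.compile(r'(?P<LT><)|(?P<EQ>=)|(?P<LE><=)')
right_order = re.compile(r'(?P<LE><=)|(?P<LT><)|(?P<EQ>=)')

def tokenize(pattern, text):
    return [(m.lastgroup, m.group()) for m in pattern.finditer(text)]

print(tokenize(wrong_order, '<='))  # [('LT', '<'), ('EQ', '=')]
print(tokenize(right_order, '<='))  # [('LE', '<=')]
```

Python's `re` alternation is first-match, not longest-match, so whichever alternative appears first in the pattern wins.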
@andreasg123 @Codeguyross I ran into the same issue. Were you able to find a solution?
No
I foresee the same issue in my code. It isn't written yet.
I have no problem with this.
import ply.lex as lex

tokens = (
'DEFINE', 'PRAGMA',
'LSCOPE', 'RSCOPE',
'LPAREN', 'RPAREN',
'PLUS', 'MINUS',
'EQ', 'NE',
'INV', 'NOT',
'TIMES', 'DIVIDE', 'MOD',
'LSHIFT', 'RSHIFT',
'LE', 'GE',
'LT', 'GT',
'LAND',
'LOR',
'AND',
'XOR',
'OR',
'ASSIGN',
'COMMA',
'COLON',
'STAT',
'NAME', 'NUMBER',
)
t_DEFINE = r'\#define'
t_PRAGMA = r'\#pragma'
t_LSCOPE = r'\{'
t_RSCOPE = r'\}'
t_LPAREN = r'\('
t_RPAREN = r'\)'
t_PLUS = r'\+'
t_MINUS = r'-'
t_EQ = r'=='
t_NE = r'!='
t_INV = r'~'
t_NOT = r'!'
t_TIMES = r'\*'
t_DIVIDE = r'/'
t_MOD = r'%'
t_LSHIFT = r'<<'
t_RSHIFT = r'>>'
t_LE = r'<='
t_GE = r'>='
t_LT = r'<'
t_GT = r'>'
t_LAND = r'&&'
t_LOR = r'\|\|'
t_AND = r'&'
t_XOR = r'\^'
t_OR = r'\|'
t_ASSIGN = r'='
t_COMMA = r','
t_COLON = r':'
t_STAT = r';'
t_NAME = r'[a-zA-Z_][a-zA-Z0-9_]*'
def t_NUMBER(t):
r'\d+\.\d*|\.\d+|\d+'
if "." in t.value:
t.value = float(t.value)
else:
t.value = int(t.value)
return t
t_ignore = ' \t'
def t_newline(t):
r'[\r\n]+'
t.lexer.lineno += len(t.value)
def t_error(t):
print("Illegal character '%s'" % t.value[0])
t.lexer.skip(1)
lexer = lex.lex()
test = "< <= > >= ! != == ="
lexer.input(test)
while True:
tok = lexer.token()
if not tok:
break
print(tok)
When building the master regular expression, rules are added in the following order:
All tokens defined by functions are added in the same order as they appear in the lexer file. Tokens defined by strings are added next by sorting them in order of decreasing regular expression length (longer expressions are added first). Without this ordering, it can be difficult to correctly match certain types of tokens. For example, if you wanted to have separate tokens for "=" and "==", you need to make sure that "==" is checked first. By sorting regular expressions in order of decreasing length, this problem is solved for rules defined as strings. For functions, the order can be explicitly controlled since rules appearing first are checked first.
Source: https://www.dabeaz.com/ply/ply.html
I found the answer here: https://stackoverflow.com/questions/58867582/smop-has-issues-when-translating-a-statment-in-matlab
Just edit lexer.py and add a backslash before "<=" and ">=".
For example:
t_LE = r'\<='
t_GE = r'\>='
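Why the backslash helps: per the PLY docs quoted above, string-defined rules are sorted by the length of the regex text, so lengthening r'<=' to r'\<=' moves it earlier in the master regular expression (the escape itself does not change what the pattern matches). A rough sketch of that sort, with hypothetical rule names:

```python
# Sorted stably by decreasing length of the raw regex text, as PLY does
# for string-defined token rules.
rules = [('EQ', r'=='), ('LT', r'<'), ('LE', r'<=')]
escaped = [('EQ', r'=='), ('LT', r'<'), ('LE', r'\<=')]

order = [n for n, rx in sorted(rules, key=lambda r: len(r[1]), reverse=True)]
order_escaped = [n for n, rx in sorted(escaped, key=lambda r: len(r[1]), reverse=True)]

print(order)          # ['EQ', 'LE', 'LT'] -- LE ties with EQ at length 2
print(order_escaped)  # ['LE', 'EQ', 'LT'] -- r'\<=' now sorts first
```

Because Python's sort is stable, rules of equal length keep their original relative order; the extra character breaks the tie deterministically in LE's favor.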
There is a fork of smop which may have solved your problem. See: https://github.com/PatrickFURI/smop Regards, Rob
On 18/02/2020, at 09:06, mateuszanatta notifications@github.com wrote:
I ran into this issue and tried bckpkol's solution. I rearranged the expressions EQ, LE, and GE; however, I still cannot convert ">=" or "<=".
Has anyone found a solution?
Example: conversion from MATLAB to Python still fails when '>=' or '<=' is encountered.