abhradeepde123 opened this issue 1 year ago
Ok fine, I'm only opening this because I found a really efficient way to tokenize (i.e. lex) the .myopl files. Just ditch the current Lexer.make_tokens function and implement something like this instead:
```python
def make_tokens(self):
    tokens = []
    # Dispatch table: each operator character maps to a thunk
    # that appends the matching token.
    LEX_MAP = {
        "+": lambda: tokens.append(Token(TokenType.ADD, "+")),
        "-": lambda: tokens.append(Token(TokenType.SUB, "-")),
        "*": lambda: tokens.append(Token(TokenType.MUL, "*")),
        "/": lambda: tokens.append(Token(TokenType.DIV, "/")),
    }
    self.advance()
    while self.current_char is not None:
        if self.current_char in LEX_MAP:
            LEX_MAP[self.current_char]()
        self.advance()  # always consume the char, or this loop never ends
    return tokens
```
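However the Lexer constructor looks in your copy, calling make_tokens on something like "+-*/" should now hand back the four operator tokens in order; anything the map doesn't know (digits, letters) just gets skipped for now.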
(You can also drive this with regex; I've sketched that below.)
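Here's a minimal sketch of the regex version, assuming the same Token and TokenType classes as above (plus a TokenType.NUMBER member, which the snippet above doesn't show); TOKEN_SPEC, MASTER_RE, and make_tokens_regex are just names I made up for the example:

```python
import re

# Assumes Token/TokenType from above, including a NUMBER member.
TOKEN_SPEC = [
    ("NUMBER", r"\d+(?:\.\d+)?"),  # integer or float literal
    ("ADD",    r"\+"),
    ("SUB",    r"-"),
    ("MUL",    r"\*"),
    ("DIV",    r"/"),
    ("SKIP",   r"[ \t]+"),         # whitespace: matched but dropped
]
# One master pattern; each alternative is a named group.
MASTER_RE = re.compile("|".join(
    f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC
))

def make_tokens_regex(text):
    tokens = []
    for match in MASTER_RE.finditer(text):
        kind = match.lastgroup  # name of the alternative that matched
        if kind == "SKIP":
            continue
        # Look up the matching TokenType member by the group's name.
        tokens.append(Token(getattr(TokenType, kind), match.group()))
    return tokens
```

On "1 + 2*3" this should come back as NUMBER, ADD, NUMBER, MUL, NUMBER. One caveat: finditer silently skips anything no pattern matches, so a real version should add a catch-all alternative at the end and raise an error on illegal characters.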
This isn't really a problem with the current code, but if it gets added, people could keep making improvements and basically turn myopl into an actual programming language. Thoughts, anyone?