OData / OData.Neo


Why use manual (lexing/tokenization) and parsing? #24

Closed Gigabyte0x1337 closed 2 years ago

Gigabyte0x1337 commented 2 years ago

From what I am seeing, we are trying to parse the query manually, but there are tools like ANTLR that can generate a parser from a grammar, for example: https://github.com/luca-vercelli/odata-jpa-mini/blob/master/odata-jpa-mini/src/main/antlr4/odata/antlr/ODataLexer.g4 This isn't exactly what you need, because the functions there are hard-coded, but that could be made dynamic.

Is there a reason for doing it manually?

TehWardy commented 2 years ago

The idea is to break the problem down into its component parts instead of relying on a black-box solution to do this for us. Once this process is complete, it will open up our options for consuming OData in much more flexible ways and, as per our last video session, let us combine this technology with other technologies in more hybrid scenarios.
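To make the "component parts" idea concrete: the lexing/tokenization step on its own is small and inspectable. Below is a minimal sketch of a hand-rolled tokenizer for a simplified OData `$filter` expression. It is written in Python purely for brevity (OData.Neo itself is a C# project), and all token names and patterns here are hypothetical illustrations, not taken from the project's source.

```python
import re

# Illustrative token rules for a *simplified* OData $filter expression.
# OPERATOR is listed before IDENT so that keywords like "eq" are not
# swallowed by the generic identifier rule.
TOKEN_SPEC = [
    ("STRING",   r"'[^']*'"),                        # quoted literal, e.g. 'John'
    ("NUMBER",   r"\d+"),                            # integer literal
    ("OPERATOR", r"\b(eq|ne|gt|ge|lt|le|and|or)\b"), # comparison/logical keywords
    ("IDENT",    r"[A-Za-z_]\w*"),                   # property name
    ("LPAREN",   r"\("),
    ("RPAREN",   r"\)"),
    ("WS",       r"\s+"),                            # whitespace, skipped below
]

# One master regex with a named group per token kind.
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(text):
    """Yield (kind, lexeme) pairs; raise on any unrecognized character."""
    pos = 0
    while pos < len(text):
        match = MASTER.match(text, pos)
        if match is None:
            raise ValueError(f"Unexpected character at position {pos}: {text[pos]!r}")
        pos = match.end()
        if match.lastgroup != "WS":
            yield (match.lastgroup, match.group())

print(list(tokenize("Name eq 'John' and Age gt 30")))
```

The point of owning a step like this, rather than generating it from a grammar, is that each rule is plain code you can test, extend (e.g. with dynamically registered functions), and teach from, which is exactly the trade-off being discussed here.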

Having people like @xuzhg on board with this project also gives us keen insight from the point of view of an existing implementation, allowing us to grow further and possibly improve on past solutions by leveraging lessons learnt.

As always, software is not a one-and-done solution; the key here is also to show and teach the community how something as complex as OData can be achieved.