microsoft / ts-parsec

Writing a custom parser is a fairly common need. Although parser combinators already exist in other languages, TypeScript provides a powerful and well-structured foundation for building one. Error handling and ambiguity resolution are the usual weaknesses of parser combinators, but they are among ts-parsec's key features. Additionally, ts-parsec provides a very easy-to-use programming interface that can help people build programming-language-scale parsers in just a few hours. This technology has already been used in Microsoft/react-native-tscodegen.

Nongreedy rules #51

Closed: peterbud closed this issue 1 year ago

peterbud commented 1 year ago

I was struggling to create non-greedy rules in the parser, like the ones described here for ANTLR4: https://github.com/antlr/antlr4/blob/master/doc/wildcard.md#wildcard-operator-and-nongreedy-subrules

Is there a designated way to achieve this? For context: I'd like to extract some information from a text without defining the whole grammar.

ZihanChen-MSFT commented 1 year ago

I think it works like this: rep gives you all results, and rep_sc gives you the "most greedy" result. Do you have a more specific requirement?
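
For example, with a tiny invented lexer the difference is roughly this (a sketch only; it uses buildLexer, rep, rep_sc and tok from typescript-parsec, and the token kinds are made up for illustration):

import { buildLexer, rep, rep_sc, tok } from "typescript-parsec"

enum T {
  Num,
  Space,
}

const numberLexer = buildLexer([
  [true, /^\d+/g, T.Num],
  [false, /^\s+/g, T.Space],
])

const tokens = numberLexer.parse("1 2 3")

// rep keeps every possible repetition count as a separate candidate, so the
// output is ambiguous: every prefix of the token list survives as a result.
const all = rep(tok(T.Num)).parse(tokens)
console.log(all.successful ? all.candidates.length : 0) // one candidate per count

// rep_sc ("short cut") commits to the longest repetition and returns it alone.
const longest = rep_sc(tok(T.Num)).parse(tokens)
console.log(longest.successful ? longest.candidates.length : 0) // 1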

peterbud commented 1 year ago

Consider the following input:

Dear customer, here is the statement of your account as of 01.01.2023. more text to discard...

Account statement EUR

The following transactions were recorded... even more text to discard

01.12.2022 Transfer 100
03.12.2022 Transfer 300

Sincerely, more text to discard

The first requirement is to be able to consume any token in a non-greedy way, for example at the beginning of the input. I have not seen a function that matches an arbitrary token, so I created one that does, which I called any - maybe there is a better way to do that?
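
For context, the helper can be sketched roughly like this (an illustration of the idea only, written against the Parser, ParserOutput and Token types exported by typescript-parsec; the exact error shape may differ):

import { Parser, ParserOutput, Token } from "typescript-parsec"

// Sketch of a "match any single token" combinator; it is not part of ts-parsec.
function any<TKind>(): Parser<TKind, Token<TKind>> {
  return {
    parse(token: Token<TKind> | undefined): ParserOutput<TKind, Token<TKind>> {
      if (token === undefined) {
        return {
          successful: false,
          error: { kind: "Error", pos: undefined, message: "Unexpected end of input." },
        }
      }
      // Consume exactly one token and hand back the rest of the stream.
      return {
        successful: true,
        candidates: [{ firstToken: token, nextToken: token.next, result: token }],
        error: undefined,
      }
    },
  }
}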

Then I tried the following lexer/parser:

// Imports assumed from the typescript-parsec package (omitted in the original snippet)
import { buildLexer, rep, rule, seq, str, tok } from "typescript-parsec"

export enum TokenKind {
  DMY,
  Number,
  Header,
  Other,
  Space,
}

export const lexer = buildLexer([
  [true, /^\d{2}\.\d{2}\.\d{4}/g, TokenKind.DMY],
  [true, /^(\d*\.?)*(,\d*)?( *-)?/g, TokenKind.Number],
  [true, /^account statement/gi, TokenKind.Header],

  [true, /^\S+/g, TokenKind.Other], // all other words
  [false, /^\s+/g, TokenKind.Space], // ignore whitespace
])

const STATEMENT = rule<TokenKind, unknown>()
STATEMENT.setPattern(
  seq(
    rep(any()),
    tok(TokenKind.Header),
    rep(any()),
    rep(seq(tok(TokenKind.DMY), str("Transfer"), tok(TokenKind.Number))),
  ),
)

The problem is that rep() is greedy in the sense that it will match all remaining tokens, whereas what I'm looking for is to match only until it finds the first Header token. Then again I'd need to consume all tokens until I find the first date token, which marks the beginning of the transactions.

What would be your suggestion for this problem?

ZihanChen-MSFT commented 1 year ago

I feel like this is overusing the library, which is why there is no any. I would just split the text into lines first, match each line, and keep all the ones that succeed. And I think a regular expression is simpler for this particular input.
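
For this particular input that could look roughly like the sketch below (the regular expression is just an example for the "01.12.2022 Transfer 100" lines; no ts-parsec is needed):

// Match a date, the literal word "Transfer" and an amount on a single line.
const transactionPattern = /^(\d{2}\.\d{2}\.\d{4})\s+Transfer\s+(\d+)$/

function extractTransfers(text: string): { date: string; amount: number }[] {
  return text
    .split("\n")
    .map(line => transactionPattern.exec(line.trim()))
    .filter((m): m is RegExpExecArray => m !== null)
    .map(m => ({ date: m[1], amount: Number(m[2]) }))
}

// For the statement above, extractTransfers returns:
// [ { date: "01.12.2022", amount: 100 }, { date: "03.12.2022", amount: 300 } ]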