If I understand correctly, frak will generate a regex that matches the exact set of strings it was fed, right? It would also be nice to add a learning/generalization ability, so that e.g. if you feed it "foo1" and "foo2", etc., it can generalize to /foo\d+/ rather than the strict /foo[12]/.
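To make that concrete, here's a toy sketch of one way the widening could work (plain Python, not frak's API; `generalize` is a hypothetical name): split each sample into digit/non-digit runs, and wherever a run of digits shows up in the same slot across all samples, emit \d+ instead of enumerating the alternatives.

    import re

    # Toy sketch (hypothetical, not frak's API): split each sample into
    # digit / non-digit runs, then widen any all-digit column to \d+.
    def generalize(samples):
        token_lists = [re.findall(r"\d+|\D+", s) for s in samples]
        if len({len(t) for t in token_lists}) != 1:
            raise ValueError("samples differ in shape; fall back to exact regex")
        parts = []
        for column in zip(*token_lists):
            if all(tok.isdigit() for tok in column):
                parts.append(r"\d+")                # digit slot -> generalize
            elif len(set(column)) == 1:
                parts.append(re.escape(column[0]))  # literal shared by all samples
            else:
                alts = sorted(set(re.escape(tok) for tok in column))
                parts.append("(?:%s)" % "|".join(alts))
        return "".join(parts)

    print(generalize(["foo1", "foo2"]))  # foo\d+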
At my previous job, I developed such a system, but unfortunately the code never made it outside the corporate walls. I'd feed the system a set of many thousands of legal citation strings (like "726 N.W.2d 852" or "42 U.S.C. § 405(c)(2)(C)" - see http://www.law.cornell.edu/citation/ for many more such examples) and it would create a regular expression to recognize these strings and any strings "in the same family". The idea was to combine a lexer (something that knew about the lexemes that could occur in the language - e.g. numbers, roman numerals, publication names, state names, etc.) with a regex generator based on the lexeme classes rather than raw characters.
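In rough Python (every class name and pattern here is invented for illustration; a real system would draw reporter and state names from tables, not a single regex), the two layers look something like this:

    import re

    # Invented lexeme classes for illustration; a real system would have
    # many more (roman numerals, state names, publication names, ...).
    LEXEME_CLASSES = [
        ("number",   re.compile(r"\d+")),
        ("reporter", re.compile(r"[A-Z][\w.]*")),   # e.g. "N.W.2d"
    ]
    SUBPATTERNS = {"number": r"\d+", "reporter": r"[A-Z][\w.]*"}

    def lex(citation):
        """Map a citation to a sequence of lexeme class names."""
        classes, pos = [], 0
        while pos < len(citation):
            if citation[pos].isspace():
                pos += 1
                continue
            for name, pattern in LEXEME_CLASSES:
                m = pattern.match(citation, pos)
                if m:
                    classes.append(name)
                    pos = m.end()
                    break
            else:
                raise ValueError("unknown lexeme at %r" % citation[pos:])
        return classes

    def to_regex(classes):
        """Emit one sub-pattern per lexeme class, not per raw character."""
        return r"\s+".join(SUBPATTERNS[c] for c in classes)

    print(lex("726 N.W.2d 852"))            # ['number', 'reporter', 'number']
    print(to_regex(lex("726 N.W.2d 852")))  # \d+\s+[A-Z][\w.]*\s+\d+

Generating at the class level is what keeps the output from exploding into per-character alternations.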
Some of the difficulties involved figuring out just how much to generalize, which of course depends on the problem domain. There were also ambiguous lexemes - for example, "MD" is both a roman numeral and a state abbreviation, so I had to determine which reading made better sense in context.
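The disambiguation itself can be as simple as a check on a neighboring token (a made-up heuristic, just to illustrate the shape of it):

    # Made-up heuristic: "MD" is both a roman numeral (1500) and the
    # Maryland abbreviation, so let a neighboring token break the tie.
    ROMAN_CHARS = set("IVXLCDM")

    def classify_md(prev_token):
        # A roman-numeral-shaped neighbor (e.g. a front-matter page "XIV")
        # suggests the roman reading; anything else suggests the state.
        if prev_token and set(prev_token) <= ROMAN_CHARS:
            return "roman_numeral"
        return "state_abbreviation"

    print(classify_md("XIV"))   # roman_numeral
    print(classify_md("App."))  # state_abbreviation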
The regexes got pretty big, even with the generalization (which tends to make them smaller), so I made one FSM per jurisdiction and fed them to GraphViz for some killer pictures. =)
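Getting to the GraphViz pictures is the mechanical part: each FSM state becomes a node and each transition an edge in a DOT file (the FSM encoding below is invented for the sketch):

    # Rough sketch of rendering an FSM as GraphViz DOT. The FSM format
    # (state -> {lexeme_class: next_state}) is invented for illustration.
    fsm = {
        "start":    {"number": "reporter"},
        "reporter": {"reporter": "page"},
        "page":     {"number": "accept"},
    }

    def to_dot(fsm, accepting=("accept",)):
        lines = ["digraph citation_fsm {", "  rankdir=LR;"]
        for state in accepting:
            lines.append('  "%s" [shape=doublecircle];' % state)
        for state, transitions in fsm.items():
            for label, target in transitions.items():
                lines.append('  "%s" -> "%s" [label="%s"];' % (state, target, label))
        lines.append("}")
        return "\n".join(lines)

    print(to_dot(fsm))  # pipe into `dot -Tpng` for the picture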
Anyway, food for thought if you ever want to take this project in another direction.
Nice project.