fnl / syntok

Text tokenization and sentence segmentation (segtok v2)

Benchmark against Pragmatic Segmenter #2

Open Immortalin opened 5 years ago

fnl commented 5 years ago

Good point; when I developed the first version, segtok, there were no good benchmark datasets for sentence segmentation around that had sufficient coverage of the tricky cases this library can handle. That is, all I found were examples of trivial sentence segmentation problems that virtually any statistical tagger does well on, too. But if someone has a pointer to a really tough test set with material like author abbreviations, enumerations, typos, mathematical and scientific content, and/or social-domain text (which might abuse sentence-terminal markers), it would be worth adding. Otherwise, I think the 50+ test cases I have collected as examples of such problems are my current "benchmark": I haven't found a single other library that can handle all those cases.
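
For readers who want to try such cases themselves, here is a minimal sketch of running syntok on a contrived input; the sample text is illustrative only and not taken from the actual test suite:

```python
import syntok.segmenter as segmenter

# Contrived input mixing an author abbreviation, a reference
# abbreviation, and a decimal number -- the kind of material naive
# rule-based splitters tend to break on. (Illustrative only; not one
# of the library's actual test cases.)
document = (
    "Compare Smith et al. (2010) with the results in Tab. 2. "
    "The sample was heated to 37.5 degrees and left to cool."
)

for paragraph in segmenter.process(document):
    for sentence in paragraph:
        # token.spacing holds the whitespace that preceded token.value,
        # so joining them reproduces each sentence as a plain string.
        print("".join(token.spacing + token.value for token in sentence).strip())
```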

fnl commented 5 years ago

That said, what I am currently neither interested in nor have time for is manually comparing my library against another, case by case. So if someone wants to fulfill the specific request Immortalin made here (or you yourself?), please feel free to make that comparison. I am sure either library will have its particular strengths.

For an unbiased comparison, though, what would matter more is an impartial sentence segmentation dataset that covers the trickier cases we find in the wild.

fnl commented 4 years ago

Another interesting tool to compare/benchmark against: https://github.com/nipunsadvilkar/pySBD

Note that pySBD is supposedly based on the Pragmatic Segmenter.
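
For anyone picking up the comparison, a minimal side-by-side harness could look like the sketch below. It assumes both libraries are installed (`pip install syntok pysbd`); the inputs are placeholders, not the impartial gold-standard dataset asked for above:

```python
import pysbd
import syntok.segmenter as segmenter

# Placeholder inputs; a real benchmark would use a hand-annotated set
# of tricky cases with gold-standard sentence boundaries.
CASES = [
    "He paid $12.50. Then he left.",
    "See Fig. 3. The curve flattens out after t = 10 s.",
]

def syntok_split(text):
    """Rejoin syntok's token stream into plain sentence strings."""
    return [
        "".join(t.spacing + t.value for t in sentence).strip()
        for paragraph in segmenter.process(text)
        for sentence in paragraph
    ]

pysbd_segmenter = pysbd.Segmenter(language="en", clean=False)

for text in CASES:
    print("input: ", text)
    print("syntok:", syntok_split(text))
    print("pysbd: ", pysbd_segmenter.segment(text))
    print()
```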

reepush commented 4 years ago

For my use case, syntok works just perfectly. Thanks @fnl for this project!