rth / vtext

Simple NLP in Rust with Python bindings
Apache License 2.0

Add sentence splitter #51

Closed · rth closed this 4 years ago

rth commented 5 years ago

It would be useful to add a sentence splitter. For instance, possibilities could be,

joshlk commented 4 years ago

spaCy has two sentence segmentation implementations. The default is based on the dependency parser, which requires a statistical model. The second is a simpler splitter based on punctuation (default is ".", "!", "?"). (https://spacy.io/usage/linguistic-features#sbd)

On another note, it looks like the Unicode sentence boundaries from unicode-rs/unicode-segmentation#24 have been implemented. I could look at how to incorporate this into this library?

rth commented 4 years ago

> The second implementation is a simpler splitter based on punctuation (default is ".", "!", "?").

I think you can do this already with the RegexpTokenizer using something like,

let tokenizer = RegexpTokenizerParams::default()
    .pattern(r"[^.!?]+".to_string())
    .build()
    .unwrap();
let sentences: Vec<&str> = tokenizer.tokenize("some string. another one").collect();

(I haven't checked that the regexp is correct), so I'm not sure we need a separate object for it. Maybe just documenting the appropriate regexp for sentence tokenization would be enough?

For the sentence boundaries from the unicode_segmentation crate, yes, that would be great if you are interested in looking into it! I would also be interested to know how it compares to the spaCy tokenizer that uses a language model.
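
For anyone following along, here is a minimal sketch of calling the crate directly, assuming a unicode-segmentation version that includes the sentence-boundary support from unicode-rs/unicode-segmentation#24:

use unicode_segmentation::UnicodeSegmentation;

fn main() {
    let text = "Here is one. Here is another! And a trailing fragment";
    // split_sentence_bounds yields subslices of `text` delimited by
    // UAX #29 sentence boundaries; punctuation and trailing whitespace
    // stay attached to their sentence.
    let sentences: Vec<&str> = text.split_sentence_bounds().collect();
    println!("{:?}", sentences);
}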

joshlk commented 4 years ago

I have just done a comparison between the different methods: splitting based on punctuation, Unicode segmentation, NLTK's Punkt model and spaCy. I used the Brown corpus as the benchmarking dataset.

Here are my results:

| Method               | Precision | Recall | F1    |
|----------------------|-----------|--------|-------|
| Punctuation splitter | 0.896     | 0.915  | 0.906 |
| Unicode segmentation | 0.938     | 0.912  | 0.925 |
| NLTK Punkt           | 0.907     | 0.875  | 0.891 |
| spaCy                | 0.924     | 0.908  | 0.916 |
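
(For reference, F1 is the harmonic mean of precision and recall: F1 = 2·P·R / (P + R). For example, for Unicode segmentation, 2 · 0.938 · 0.912 / (0.938 + 0.912) ≈ 0.925, which matches the table.)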

Jupyter notebook with full analysis

Interestingly, each method scores very similarly, presumably because most sentences are quite easy (a full stop followed by a space) and very few are more difficult (for example quotes, colons, etc.). Surprisingly, Unicode segmentation has the best score (F1) and has the added benefit of being language-independent (as previously suggested by @jbowles in #52).

I will open a PR incorporating UnicodeSegmentation in the coming days.

rth commented 4 years ago

Thanks! Sounds great. It's indeed interesting that Unicode segmentation is competitive even compared to spaCy, and I imagine it's much faster.

joshlk commented 4 years ago

PR #66 implements the thin wrapper around the Unicode sentence segmentation.

Regarding the "simple punctuation splitter": using a regex like [^\.!\?] doesn't work, as you would lose the punctuation at the end of each sentence. I also tried (.*?[\.\?!]\s?), but here you would lose the trailing sentence if it didn't end with punctuation. For example:

Input = ["Here is one. Here is another! This trailing text is one more"]
Desired Output = ["Here is one.", "Here is another!", "This trailing text is one more"]

I don't think it's possible with a regex alone. Do you have any ideas?
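
To make the failure mode concrete, here is a quick sketch with the regex crate showing the trailing sentence being dropped:

use regex::Regex;

fn main() {
    let re = Regex::new(r"(.*?[.!?]\s?)").unwrap();
    let found: Vec<&str> = re
        .find_iter("Here is one. Here is another! This trailing text is one more")
        .map(|m| m.as_str())
        .collect();
    // found == ["Here is one. ", "Here is another! "]
    // The trailing sentence contains no punctuation, so find_iter never matches it.
    println!("{:?}", found);
}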

Another tactic would be to create an iterator, similar to what spaCy does and what I did in the Jupyter notebook. For example:


import re


def split_on_punct(doc: str):
    """Split a document into sentences on the punctuation ".", "!", "?"."""
    punct_set = {'.', '!', '?'}

    start = 0
    seen_period = False

    for i, token in enumerate(doc):
        is_punct = token in punct_set
        if seen_period and not is_punct:
            if re.match(r'\s', token):
                # keep one trailing whitespace character with the sentence
                yield doc[start : i + 1]
                start = i + 1
            else:
                yield doc[start : i]
                start = i
            seen_period = False
        elif is_punct:
            seen_period = True
    # emit the trailing sentence even without punctuation
    if start < len(doc):
        yield doc[start:]
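
A rough Rust port of the same logic might look something like this (an illustrative sketch that collects into a Vec; a proper implementation would expose it as an iterator, and this is not the vtext API):

/// Split `doc` into sentences on ".", "!", "?" — a direct port of the
/// Python generator above.
fn split_on_punct(doc: &str) -> Vec<&str> {
    let mut sentences = Vec::new();
    let mut start = 0;
    let mut seen_punct = false;
    for (i, ch) in doc.char_indices() {
        let is_punct = matches!(ch, '.' | '!' | '?');
        if seen_punct && !is_punct {
            if ch.is_whitespace() {
                // keep one trailing whitespace character with the sentence
                let end = i + ch.len_utf8();
                sentences.push(&doc[start..end]);
                start = end;
            } else {
                sentences.push(&doc[start..i]);
                start = i;
            }
            seen_punct = false;
        } else if is_punct {
            seen_punct = true;
        }
    }
    // emit the trailing sentence even without punctuation
    if start < doc.len() {
        sentences.push(&doc[start..]);
    }
    sentences
}
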
joshlk commented 4 years ago

FYI I'm going to look into implementing the "simple punctuation splitter" using a Rust iterator.

rth commented 4 years ago

> FYI I'm going to look into implementing the "simple punctuation splitter" using a Rust iterator.

Thanks, that would be great!

> Regarding the "simple punctuation splitter": using a regex like [^\.!\?] doesn't work, as you would lose the punctuation at the end of each sentence.

I think using the regex crate would still work, but using split instead of find_iter to avoid the issue of the last sentence. Though I agree you would need a separate tokenizer for it.
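
For illustration, a sketch of the split-based approach. Note that it keeps the trailing sentence but drops the punctuation the pattern consumes, which was the original objection:

use regex::Regex;

fn main() {
    let re = Regex::new(r"[.!?]\s*").unwrap();
    let sentences: Vec<&str> = re
        .split("Here is one. Here is another! This trailing text is one more")
        .filter(|s| !s.is_empty())
        .collect();
    // sentences == ["Here is one", "Here is another", "This trailing text is one more"]
    println!("{:?}", sentences);
}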

rth commented 4 years ago

Closing as resolved in https://github.com/rth/vtext/pull/66 and https://github.com/rth/vtext/pull/70, thanks again @joshlk!