Ekphrasis is a text processing tool geared towards text from social networks, such as Twitter or Facebook. Ekphrasis performs tokenization, word normalization, word segmentation (for splitting hashtags) and spell correction, using word statistics from two large corpora: English Wikipedia and a collection of 330 million English tweets.
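For example, the hashtag segmenter and the spell corrector can be used roughly as follows (a minimal sketch based on the documented `Segmenter` and `SpellCorrector` classes, with corpus names as given in the project README):

```python
from ekphrasis.classes.segmenter import Segmenter
from ekphrasis.classes.spellcorrect import SpellCorrector

# Word segmenter built from the Twitter corpus statistics
seg_tw = Segmenter(corpus="twitter")
print(seg_tw.segment("smallandinsignificant"))  # -> "small and insignificant"

# Spell corrector built from the English (Wikipedia) corpus statistics
sp = SpellCorrector(corpus="english")
print(sp.correct("korrect"))  # -> "correct"
```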
Do you expose your underlying language model for uni/bigrams? #18
This library is really superb.
One of the tools I wish I had is a basic statistical language model (relative frequencies) of various unigrams, bigrams, and trigrams. When extracting keywords from text, one shortcoming of TF-IDF is that its scores are not calibrated, so unigram and bigram scores cannot be compared with one another. There is also the problem of needing document and token frequencies in the first place. Instead, I normalize the TF/TF-IDF scores against English corpus statistics, which you already have within your models. Usually I use the unwieldy Google Ngrams corpus, but yours is succinct and quite helpful. Is this easily accessible?
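For concreteness, here is a rough sketch of the kind of normalization I have in mind, assuming the counts were exposed as plain dictionaries mapping n-grams to frequencies (all names and the toy counts below are hypothetical, and weighting by surprisal is just one possible calibration):

```python
import math

# Hypothetical counts, e.g. loaded from the word statistics shipped with ekphrasis.
unigram_counts = {"data": 120_000, "science": 80_000}
bigram_counts = {"data science": 9_000, "machine learning": 15_000}

UNI_TOTAL = sum(unigram_counts.values())
BI_TOTAL = sum(bigram_counts.values())

def corpus_logprob(ngram: str) -> float:
    """Log relative frequency of a unigram or bigram in the reference corpus."""
    if " " in ngram:
        count, total = bigram_counts.get(ngram, 0), BI_TOTAL
    else:
        count, total = unigram_counts.get(ngram, 0), UNI_TOTAL
    return math.log((count + 1) / (total + 1))  # add-one smoothing for unseen n-grams

def calibrated_score(ngram: str, tfidf: float) -> float:
    """Scale a raw TF-IDF score by the n-gram's corpus surprisal (-log p),
    so unigram and bigram scores end up on a comparable scale."""
    return tfidf * -corpus_logprob(ngram)

print(calibrated_score("science", 0.4))        # unigram, rescaled
print(calibrated_score("data science", 0.4))   # bigram, now directly comparable
```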
Thanks!