jekyll / classifier-reborn

A general classifier module to allow Bayesian and other types of classifications. A fork of cardmagic/classifier.
https://jekyll.github.io/classifier-reborn/
GNU Lesser General Public License v2.1

Ability to specify custom tokenizer #131

Open ibnesayeed opened 7 years ago

ibnesayeed commented 7 years ago

Currently, the following code is used to split a document into tokens/words for training and classification.

str.gsub(/[^\p{WORD}\s]/, '').downcase.split

This covers the general case, but there could be situations where the user might want to customize the way a document is split into words. For example, tokenizing Japanese text could be a whole different thing. Another situation where a custom tokenizer is needed is when the user wants to train the model on N-grams (for example, bi-grams such as New York). Splitting New and York apart would mean New gets removed if it is present in the stopwords. Similarly, to be or not to be is another popular example of a significant phrase made up entirely of common stopwords. N-grams often play a significant role in contextualizing a document and help improve the accuracy of the model in special situations. In many languages (Arabic, Persian, Urdu, etc., to name a few) two or more words are combined (they are still separated by a space, just put together) to form various linguistic constructs. This could be important if one wants to know who the author of a relatively small piece of text is, such as those posted on forums.
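To make the bi-gram case concrete, here is a minimal sketch of a word-bigram (shingle) tokenizer built on the same regex the library currently uses. The `bigrams` helper is hypothetical and not part of the gem:

```ruby
# Hypothetical word-bigram (shingle) tokenizer sketch; not part of the gem.
# Reuses the library's current normalization (strip punctuation, downcase),
# then emits overlapping word pairs so phrases like "new york" survive
# stopword removal as a single token.
def bigrams(str)
  words = str.gsub(/[^\p{Word}\s]/, '').downcase.split
  words.each_cons(2).map { |pair| pair.join(' ') }
end

bigrams("I love New York")
# => ["i love", "love new", "new york"]
```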

It would be nice if we could pass a lambda as a tokenizer at the time of classifier initialization, or some other more expressive means to tell the system how to split the text.
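A sketch of what the lambda-based approach could look like. Note that the `tokenizer:` option shown in the comment does not exist in the gem; it is only the proposed shape:

```ruby
# A tokenizer as a plain lambda: split on whitespace only, keeping
# punctuation attached rather than stripping it as the current regex does.
whitespace_tokenizer = ->(str) { str.downcase.split(/\s+/) }

# Proposed (hypothetical) usage at initialization time:
# classifier = ClassifierReborn::Bayes.new 'Interesting', 'Uninteresting',
#                                          tokenizer: whitespace_tokenizer

whitespace_tokenizer.call("Hello, World program")
# => ["hello,", "world", "program"]
```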

Ch4s3 commented 7 years ago

I was thinking about adding n-gram support as well. I want to do this by abstracting tokenizing out into a separate public API that can either be called by the classifier or passed in. I'm not sure which approach would be better.

ibnesayeed commented 7 years ago

Would dependency injection be a good idea, where we create an instance of the tokenizer and then pass it during the initialization of the classifier, the way we do for the storage backend support?
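A rough sketch of that dependency-injection shape, mirroring how a storage backend instance is passed in. The `WhitespaceTokenizer` class and the `tokenizer:` keyword are hypothetical, not existing gem API:

```ruby
# Hypothetical tokenizer object with a single public method, so any object
# responding to #tokenize could be injected into the classifier.
class WhitespaceTokenizer
  def tokenize(str)
    str.downcase.split(/\s+/)
  end
end

# Proposed (hypothetical) usage, analogous to the storage backend pattern:
# classifier = ClassifierReborn::Bayes.new 'Spam', 'Ham',
#                                          tokenizer: WhitespaceTokenizer.new

WhitespaceTokenizer.new.tokenize("To be or not")
# => ["to", "be", "or", "not"]
```

One advantage of injecting an object over a bare lambda is that the tokenizer can carry configuration (n-gram size, stopword handling) in its own state.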

ibnesayeed commented 7 years ago

In the first post, what I described was n-grams based on words, which are also called shingles. However, one can also use letter-based n-grams, which often produce good results while putting a finite upper bound on total memory used (the maximum possible number of keys is the number of possible letters raised to the power of the n-gram length), and could be ideal for training on large collections.
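A minimal sketch of such a letter-based n-gram tokenizer (again, not part of the gem). With an alphabet of k characters and n-grams of length n, at most k**n distinct keys can ever be stored, which is what bounds the memory regardless of corpus size:

```ruby
# Hypothetical character-level n-gram tokenizer sketch; not gem API.
# Normalizes whitespace, then emits every overlapping run of n characters.
def char_ngrams(str, n = 3)
  chars = str.downcase.gsub(/\s+/, ' ').chars
  return [] if chars.size < n
  chars.each_cons(n).map(&:join)
end

char_ngrams("new york", 3)
# => ["new", "ew ", "w y", " yo", "yor", "ork"]
```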

Ch4s3 commented 7 years ago

Yeah, I think dependency injection is the way to go here.

piroor commented 7 years ago

I've opened #161, but it should be resolved as a part of this tokenizer issue... Sorry I didn't research this before I opened it.

Ch4s3 commented 7 years ago

@piroor Thanks for hopping in!