`ClassifierReborn::Hasher::word_hash_for_words` always rejects terms shorter than 3 characters. The minimum length is hardcoded at: https://github.com/jekyll/classifier-reborn/blob/master/lib/classifier-reborn/extensions/hasher.rb#L27 However, there are many meaningful terms shorter than 3 characters in Japanese text, for example: "真" ("true"), "偽" ("false"), "信頼" ("trust"), "弟" ("younger brother"), and others. I'm using MeCab as the tokenizer to extract verbs and nouns based on MeCab's dictionary, and most of these meaningful tokens are dropped by the hardcoded minimum term length.
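To make the effect concrete, here is a minimal standalone sketch of that length filter (the constant and method names are illustrative, not the library's actual internals):

```ruby
# Illustrative sketch: a hardcoded 3-character minimum, as in the linked line.
MIN_LENGTH = 3

def filter_terms(words)
  # Terms shorter than MIN_LENGTH are dropped regardless of meaning.
  words.select { |w| w.length >= MIN_LENGTH }
end

tokens = %w[真 偽 信頼 弟 information]
filter_terms(tokens)  # => ["information"] -- every Japanese token is dropped
```

Note that Ruby's `String#length` counts characters, not bytes, so "信頼" has length 2 and still falls under the limit.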
So, I propose adding a new parameter to change the minimum length of acceptable terms, like:

```ruby
def word_hash_for_words(words, language = 'en', enable_stemmer = true, minimum_word_length = 3)
  d = Hash.new(0)
  words.each do |word|
    next unless word.length >= minimum_word_length && !STOPWORDS[language].include?(word)
    # ...
```
(Of course, we would also need to update `ClassifierReborn::Bayes#initialize` and the other call sites.)
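The proposed change can be sketched end-to-end like this; the stopword table is a simplified stand-in and the body is reduced (the real method also stems when `enable_stemmer` is true), so treat it as a sketch rather than a patch:

```ruby
# Simplified stand-in for the library's stopword table.
STOPWORDS = Hash.new { |h, k| h[k] = [] }

# Sketch of the proposed signature: minimum_word_length defaults to 3,
# so existing behavior is unchanged, but CJK users can pass 1.
def word_hash_for_words(words, language = 'en', enable_stemmer = true, minimum_word_length = 3)
  d = Hash.new(0)
  words.each do |word|
    next unless word.length >= minimum_word_length && !STOPWORDS[language].include?(word)
    d[word.downcase.to_sym] += 1  # simplified: no stemming in this sketch
  end
  d
end

word_hash_for_words(%w[真 信頼], 'ja', true, 1)  # => {:"真"=>1, :"信頼"=>1}
word_hash_for_words(%w[真 信頼], 'ja')           # => {} with the default of 3
```

Because the new parameter is last and defaulted, existing callers keep their current behavior.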
How about this?