CanCLID / canto-filter

粵文語料篩選器 Cantonese text filter
https://pypi.org/project/canto-filter/
MIT License

Thoughts on automating the addition of canto_unique characters and phrases #4

Open AlienKevin opened 1 year ago

AlienKevin commented 1 year ago

Because we don't have a reliable Cantonese tokenizer, every phrase added to the classifier needs to satisfy the following property, in addition to not being shared with Mandarin: the phrase's frequency in an untokenized Mandarin corpus must be very low. We don't want incorrect word boundaries to mess up our classification. For example, if a phrase like 全家下棋 is common in Mandarin, we don't want to add 家下 to canto_unique: the classifier would misjudge the word boundary in Mandarin texts and label a Mandarin text as Cantonese.

A way to automate the additions

Words.hk provides lists of Cantonese phrases and characters as well as their common variants. It also labels whether a phrase is 書面語 (written Chinese). We first exclude any 書面語 words from our candidates for canto_unique. Then we can look up each of the remaining characters or phrases in a large Mandarin corpus (e.g. WuDao Corpus 2.0, a 200 GB corpus released to the public). If the character/phrase appears below a certain frequency, we add it to the canto_unique list. Alternatively, we can generate such a list of candidates and manually review them before adding them to the list.
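Something along these lines could work as a first pass. This is only a sketch: the words.hk file layout, field names, and the frequency threshold are placeholder assumptions, and for a 200 GB corpus you'd probably want to shard the scan or precompute n-gram counts instead of streaming it like this.

```python
# Sketch only: file layouts, field names, and the threshold are assumptions.
import json

FREQ_THRESHOLD = 1e-7  # max allowed relative frequency in the Mandarin corpus


def load_candidates(words_hk_path):
    """Load words.hk entries and drop anything tagged as 書面語."""
    with open(words_hk_path, encoding="utf-8") as f:
        entries = json.load(f)
    return [e["word"] for e in entries if not e.get("is_written_chinese", False)]


def count_untokenized(phrases, corpus_path):
    """Single streaming pass over the untokenized Mandarin corpus,
    counting raw substring hits for every candidate phrase."""
    counts = {p: 0 for p in phrases}
    total_chars = 0
    with open(corpus_path, encoding="utf-8") as f:
        for line in f:
            total_chars += len(line)
            for p in phrases:
                counts[p] += line.count(p)
    return counts, total_chars


def propose_canto_unique(words_hk_path, corpus_path):
    candidates = load_candidates(words_hk_path)
    counts, total_chars = count_untokenized(candidates, corpus_path)
    # Keep only phrases that are (almost) unseen in the Mandarin corpus;
    # these still go through manual review before landing in canto_unique.
    return [p for p in candidates if counts[p] / total_chars < FREQ_THRESHOLD]
```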

Shortcomings of the current classification model

Since we don't have a Cantonese tokenizer, we have to rely on a coarser statistical estimate. That can produce false positives (text classified as Cantonese that is actually Mandarin) and false negatives (the Cantonese word is disguised through 假借 phonetic borrowing, or it's simply not on our list). Under these constraints, the best approach is to grow the set of canto_unique words that are very rare in Mandarin. Another approach is to add more mando_unique words that appear very rarely in Cantonese, but that would require looking the words up in a large Cantonese corpus (e.g. the LIHKG forum). However, since the written Cantonese corpora I'm aware of are generally much smaller than their Mandarin counterparts and skew toward a narrow age range, I think the former approach, relying on Mandarin corpora, would yield a more reliable classification.

laubonghaudoi commented 1 year ago

I think it's feasible, but I don't have time to do this analysis myself. Can I hand it over to you?

AlienKevin commented 1 year ago

Sure, I'll look into how to do it when I have time.

laubonghaudoi commented 1 year ago

@AlienKevin Just checking in, is there any progress?

AlienKevin commented 1 year ago

@laubonghaudoi Lately I've been working on Cantonese-Mandarin translation and have been using this classifier; it helps a lot with data cleaning, but I haven't had time to improve it yet. My current idea is actually to fine-tune ayaka's cantonese-bart-base into a classifier, which should be easier than hand-writing rules. I tried the current classifier on LIHKG data and quite a lot of the output is Neutral. The main problem is that it can't recognise Cantonese-specific usages of shared Chinese characters, such as 「片後半段多笑啊」; sentences like that are hard to catch with regex or n-gram rules. So for now I'm only taking the 40% labelled Cantonese for training, although I suspect most of the 47% Neutral is actually Cantonese. From the classifier's point of view, the current implementation's Cantonese recall is a bit high (a lot of Cantonese gets classified as Neutral).

| Language | Sentences | Proportion |
|---|---|---|
| Cantonese | 43370662 | 40% |
| Cantonese mixed with Mandarin | 2216730 | 2% |
| Neutral | 50767873 | 47% |
| Mandarin | 6913635 | 6% |
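
If I go down that route, the fine-tuning itself would be fairly standard Hugging Face code. A rough sketch follows; the checkpoint id, data files, and hyperparameters are assumptions, not tested settings.

```python
# Assumed checkpoint name and CSV layout; silver labels could come from the
# current rule-based classifier.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_ID = "Ayaka/bart-base-cantonese"  # assumed id of ayaka's Cantonese BART
LABELS = ["cantonese", "mixed", "neutral", "mandarin"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_ID, num_labels=len(LABELS))

# Expect CSVs with a `text` column and an integer `label` column (0-3).
data = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})
data = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
                batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="canto-cls",
                           per_device_train_batch_size=32,
                           num_train_epochs=1),
    train_dataset=data["train"],
    eval_dataset=data["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```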
laubonghaudoi commented 1 year ago

Do you mean recall is a bit low? The more Cantonese that gets misclassified as Neutral, the lower the recall should be.

You're right that this classifier judges purely on characters and words and has no way to use syntax. But if we use a heavyweight model like BERT to do the classification, isn't that putting the cart before the horse? We built this classifier in the first place to prepare data for training Cantonese models like BERT; turning around and using BERT to do the classification blows the speed and cost budgets completely. The current classifier can process close to a thousand sentences per second on an ordinary computer; classifying with BERT would be who knows how much slower and more expensive.

As the README says, the purpose of this classifier is to filter Cantonese corpora, not to classify every sentence correctly. So it sacrifices recall for precision: we'd rather miss some Cantonese than fail to guarantee that the Cantonese we filter out is really Cantonese and not mixed with Mandarin.
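That is also why typical usage is just a cheap filter loop over the raw corpus, something like the sketch below (assuming the judge() entry point shown on the PyPI page; treat the exact API as an assumption if your installed version differs).

```python
# Keep only the high-precision "cantonese" label; "neutral" and "mixed"
# sentences are deliberately dropped.
from cantofilter import judge  # assumed import path from the PyPI README

with open("raw_corpus.txt", encoding="utf-8") as fin, \
     open("cantonese_only.txt", "w", encoding="utf-8") as fout:
    for line in fin:
        if judge(line.strip()) == "cantonese":
            fout.write(line)
```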

AlienKevin commented 1 year ago

Yes, you're right. BART is fairly slow and not suitable for preprocessing. So my earlier proposal of doing n-gram frequency counts on a Mandarin corpus is probably more realistic.

ayaka14732 commented 1 year ago

BERT doesn't even count as a heavyweight model anymore lol

ming0308uk commented 3 months ago

I finetuned a BERT model for this purpose.

https://huggingface.co/ming030890/chinese-langid?text=%E4%BF%82%E5%94%94%E4%BF%82%E5%8E%BB%E9%A3%9F%E9%A3%AF%EF%BC%9F

It doesn't support the label 'neutral', but you can technically check whether the score is close to 0.5 instead.
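For example, something like this maps low-confidence predictions to a neutral bucket; the 0.6 cut-off and the returned label names are assumptions, not values published with the model.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ming030890/chinese-langid")


def classify(sentence, confidence_cutoff=0.6):
    # pipeline returns e.g. [{"label": "...", "score": 0.93}]
    result = classifier(sentence)[0]
    if result["score"] < confidence_cutoff:
        return "neutral"  # model is unsure, so treat as neutral
    return result["label"]


print(classify("係唔係去食飯？"))
```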

Let me know if it works for you!