ftyers / commonvoice-utils

Linguistic processing for Common Voice
GNU Affero General Public License v3.0

Add a method of checking CJK #13

Open ftyers opened 3 years ago

ftyers commented 3 years ago

Perhaps something like PASS to basically return whatever was input and REPL for removing punctuation.

Another option would be something like CB for check Unicode Block.

wenjie-p commented 3 years ago

Hi Fran,

I just noticed this issue, but I think I may be able to help with this for Chinese.

> Perhaps something like PASS to basically return whatever was input and REPL for removing punctuation.

If I understand correctly, this is used to separate the valid characters from punctuation. Generally, punctuation should not be considered for AM (acoustic model) training in Chinese. But some punctuation marks like ?! usually signal strong emotion and thus differ from a plain declarative ending, so I think punctuation removal should be handled carefully and should take the transcripts into account.

> Another option would be something like CB for check Unicode Block.

I think we can separate the valid characters from punctuation for Chinese based on their hexadecimal Unicode code points.
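A minimal sketch of that code-point-based check, assuming a few standard CJK Unicode block ranges; the function names here are illustrative, not part of commonvoice-utils:

```python
# Hypothetical "CB"-style rule: keep only characters whose code points
# fall inside common CJK Unicode blocks, dropping punctuation, Latin
# letters, digits, etc. The ranges are standard Unicode block ranges.
CJK_BLOCKS = [
    (0x4E00, 0x9FFF),   # CJK Unified Ideographs
    (0x3400, 0x4DBF),   # CJK Unified Ideographs Extension A
    (0xF900, 0xFAFF),   # CJK Compatibility Ideographs
]

def is_cjk(ch: str) -> bool:
    """Return True if the character's code point lies in a CJK block."""
    cp = ord(ch)
    return any(lo <= cp <= hi for lo, hi in CJK_BLOCKS)

def keep_cjk(text: str) -> str:
    """Strip every character outside the listed CJK blocks."""
    return ''.join(ch for ch in text if is_cjk(ch))

print(keep_cjk("你好，世界! hello"))  # → 你好世界
```

Note that fullwidth punctuation such as ，and 。 lives in separate blocks (Halfwidth and Fullwidth Forms, CJK Symbols and Punctuation), so it is filtered out along with ASCII punctuation by this kind of check.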

ftyers commented 3 years ago

For punctuation it would be interesting to see the effect of including it or not. For many acoustic models, I worry that the kind of information needed for predicting the final punctuation might be quite a long way from where it needs to be predicted, e.g. maybe the intonation difference is clear in the middle of the utterance, but the question mark needs to be predicted at the end.

On the other hand, I think this is an empirical question that could be settled by training a model with and without punctuation and looking at the errors.

I think that the "check block" option is nicer; it would allow us, for example, to exclude transcripts that include Latin characters. Also, for Chinese, are you mostly training byte-based models, or pinyin/phone-based ones?
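A sketch of the exclusion idea mentioned above: reject a transcript outright if it contains any Latin letters, rather than silently stripping them. The function name is hypothetical, not from the library, and only basic ASCII Latin letters are checked here:

```python
import re

# Hypothetical "check block"-style validator: a transcript containing
# any basic Latin letter (A-Z, a-z) is rejected rather than cleaned.
LATIN = re.compile(r"[A-Za-z]")

def is_valid_transcript(text: str) -> bool:
    """Return True if the transcript contains no Latin letters."""
    return LATIN.search(text) is None

print(is_valid_transcript("你好世界"))   # True
print(is_valid_transcript("你好 OK"))    # False
```

A real implementation would presumably also cover the Latin-1 Supplement and Latin Extended blocks, but the principle is the same.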

wenjie-p commented 3 years ago

> for Chinese, are you mostly training byte-based models, or pinyin/phone-based?

I think it depends. For a hybrid system, a pronunciation lexicon is usually required to map each character to pinyin; an E2E system, by contrast, is lexicon-free, and we can adopt BPE as the modeling unit. To be honest, my current research does not focus on Chinese ASR, but I think people choose their modeling unit based on their needs, i.e. which model or algorithm they want to improve.