kakaobrain / g2pm

A Neural Grapheme-to-Phoneme Conversion Package for Mandarin Chinese Based on a New Open Benchmark Dataset
Apache License 2.0

Why is the count of polyphonic characters in cedict larger than that in the corpus? #5

Open JohnHerry opened 4 years ago

JohnHerry commented 4 years ago

Hi, I found that the count of polyphonic characters in the corpus is 623, while the count of polyphonic characters in cedict is over 700. What is the reason? I mean, at prediction time a polyphonic character in a sentence may not be in the set of 623 polyphones, but only in the set of 700+. How will the model predict its pinyin then?

seanie12 commented 4 years ago

Hi, as mentioned in the previous issue, our dataset does not cover all possible Chinese polyphonic characters. We collected Chinese sentences from Wikipedia and labeled them, so some polyphonic characters are missing from our data. The final output of our model is a probability distribution over all possible pinyins. But as you point out, the model never sees some polyphonic characters during training, so it is highly likely that the model fails to predict the correct pinyin in such cases. I believe such cases are really rare, though.
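As a rough illustration of the failure mode discussed above, one common mitigation is to fall back to a dictionary pronunciation whenever a polyphone was not covered by the training data. The sketch below is hypothetical: `TRAINED_POLYS`, `CEDICT_DEFAULT`, and `predict_pinyin` are illustrative stand-ins, not part of g2pM's actual API or data structures.

```python
# Hypothetical fallback sketch: if a polyphonic character was never seen
# during training, the model's learned distribution is unreliable, so we
# fall back to a dictionary's default (most common) pronunciation.

TRAINED_POLYS = {
    # char -> pinyin candidates observed in the training corpus (illustrative)
    "行": ["xing2", "hang2"],
}

CEDICT_DEFAULT = {
    # char -> first pronunciation listed in the dictionary (illustrative)
    "行": "xing2",
    "龟": "gui1",  # pretend this polyphone is absent from the training set
}

def predict_pinyin(char, model_scores):
    """Return the model's top-scoring pinyin if the character was seen in
    training; otherwise fall back to the dictionary default."""
    if char in TRAINED_POLYS:
        # restrict the decision to candidates the model was trained on
        candidates = TRAINED_POLYS[char]
        return max(candidates, key=lambda p: model_scores.get(p, 0.0))
    # unseen polyphone: ignore the (untrustworthy) model scores entirely
    return CEDICT_DEFAULT.get(char)

# Seen polyphone: the model scores decide.
print(predict_pinyin("行", {"xing2": 0.3, "hang2": 0.7}))  # hang2
# Unseen polyphone: dictionary default wins regardless of model output.
print(predict_pinyin("龟", {"gui1": 0.01}))  # gui1
```

This kind of hybrid (neural model for covered polyphones, dictionary lookup otherwise) is a common practical pattern in production G2P pipelines, though the exact coverage set and fallback dictionary depend on the system.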

JohnHerry commented 4 years ago

In our tests, g2pM is not good enough for production use. Maybe more samples are needed for the CPP dataset.