xinjli / allosaurus

Allosaurus is a pretrained universal phone recognizer for more than 2000 languages
GNU General Public License v3.0

How do the different phonemes sound exactly? (Preparation for fine-tuning...) #40

Open kormoczi opened 3 years ago

kormoczi commented 3 years ago

Hi,

When I use allosaurus with the eng2102 model on an English wav file, the results look quite good (although there is one issue: if there is no silence at the beginning of the wav file, some phonemes from the beginning of the speech are missing. I am still testing this, and may open a separate issue on it later).

But when I use the universal model on a Hungarian wav file, the results are not so good (of course, I know it is not a very well-known language ;-)). So I would like to fine-tune the model. For this, I need to create the text files with the phonemes of the sentences. As stated in the docs, the phones here should be restricted to the phone inventory of my target language.

The phone inventory for Hungarian is the following: aː b bː c d dː d̠ d̪ d̪ː d̻ eː f fː h hː i iː j jː k kː l lː l̪ l̪ː m mː n nː n̪ n̪ː o oː p pː r rː r̪ r̪ː s sː s̪ s̻ t tː t̠ t̪ t̪ː t̻ u uː v vː w y yː z zː z̪ z̻ æ ø øː ɑ ɒ ɔ ɛ ɟ ɡ ɡː ɲ ɲː ɾ ʃ ʃː ʒ ʒː ʝ ʝː

But some of these phonemes I cannot identify. Here is an explanation of the IPA signs for Hungarian: https://hu.wikipedia.org/wiki/IPA_magyar_nyelvre (unfortunately it is in Hungarian, but the IPA signs are easy to find...). Can you help me understand this, or give me a link to a document describing these phonemes?
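Many of the symbols in that inventory are a base phone plus combining diacritics (and the length mark ː). A quick way to see what each symbol is made of, using only Python's standard `unicodedata` module (the sample symbols below are taken from the inventory; this is just an inspection sketch, not part of allosaurus):

```python
import unicodedata

def decompose_phone(phone):
    """Split an IPA symbol into its base characters and its diacritic marks.

    Combining diacritics (e.g. the bridge below in t̪) have a nonzero
    combining class; the length marks ː and ˑ are modifier letters, so
    they are handled explicitly.
    """
    base, marks = [], []
    for ch in unicodedata.normalize("NFD", phone):
        if unicodedata.combining(ch) or ch in "ːˑ":
            marks.append(ch)
        else:
            base.append(ch)
    return "".join(base), marks

# A few symbols from the Hungarian inventory above:
for p in ["t̪ː", "z̻", "ɲː", "d̠"]:
    base, marks = decompose_phone(p)
    print(p, "->", base, [unicodedata.name(m) for m in marks])
```

Printing the Unicode names of the marks (e.g. COMBINING BRIDGE BELOW for dental articulation) makes it easier to match each symbol against a diacritics chart.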

Thanks!

kormoczi commented 3 years ago

Two more things...

  1. I have tried to get the IPA phonemes of a Hungarian sentence with espeak-ng, but it gives different IPA characters as well. Do you perhaps have a chart or conversion table for this? (I could add the extra characters to the language's inventory, but I assume that would cause a lot of confusion...)
  2. In the text files (for fine-tuning), do we need to include spaces between the words, or any other special signs (e.g. ,.?!)? Thanks
xinjli commented 3 years ago

For the phonemes, you do not need to use that exact phoneme inventory; most of them are a standard phoneme with some diacritics attached to it. For the diacritics you can find info, for example, here

You can use a much simpler phoneme inventory if that satisfies your purpose. Actually, I think the default phoneme inventory is hard to recognize.

I am not very familiar with espeak-ng's inventory. If it is in X-SAMPA format, you can convert it using this file from panphon

For fine-tuning, you should only include phonemes separated by spaces; do not use other special signs, as they might be interpreted as phonemes.
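The rule above (space-separated phones, no punctuation) can be enforced with a small cleanup step before writing the training text files. This is a generic sketch, not an allosaurus utility; the punctuation set is an assumption:

```python
import re

def clean_transcription(line):
    """Keep whitespace-separated phone tokens and drop tokens that are
    only punctuation, which the fine-tuning text features would
    otherwise interpret as phonemes."""
    return " ".join(t for t in line.split() if not re.fullmatch(r"[,.!?;:]+", t))

print(clean_transcription("t͡s i t͡s ɒ ."))   # trailing period is dropped
print(clean_transcription("ɑ kː o r ,"))     # trailing comma is dropped
```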

kormoczi commented 3 years ago

Thanks for the answer, @xinjli, I have started to check the links you mentioned.

I would like to use a simple phoneme inventory, of course, if that is possible, but I still have a lot of questions regarding the actual phoneme inventory (I think I understand the diacritics, so that part is not a question).

Let me give you one example. The Hungarian word cica (meaning: cat) should, I think, have the following IPA "translation": t͡s i t͡s ɒ. translate_tts_hu_cica.zip

At the moment the command python3 -m allosaurus.run --lang hun -i translate_tts_hu_cica.wav gives the result t i z ɒ. This is not good, but since t͡s is not in the inventory, it is more or less understandable... So let's modify the Hungarian inventory. I went through the process (allosaurus.bin.write_phone, add one new line for t͡s, allosaurus.bin.update_phone; I even checked the inventory), but the result is still t i z ɒ.

The really interesting part is that if I do not specify the language and just run python3 -m allosaurus.run -i translate_tts_hu_cica.wav, the result (very surprisingly for me) is tɕ i tʂ ɒ, which is still not perfect, but much, much closer to the correct result.

I have even tried the topk parameter, but even with topk=5, t͡s does not come out in the result... Do you have any suggestion? Am I doing something wrong?

(By the way, I have checked Phoible, and according to https://phoible.org/languages/hung1274, all the different inventories for Hungarian have this ts phoneme (either t͡s, or ts, or t̪s̪).)

Regarding the fine-tuning... You mentioned I should not use special signs... Does that mean I should put, for example, 'z' into the text file, and not 'zː', 'z̪' or 'z̻'?

Thank you and best regards!

xinjli commented 3 years ago

It might be that the likelihood ordering is tɕ >= t >= t͡s. This is typically caused by the unbalanced training set I used when training the model. You might want to suppress t, or even delete it from your inventory if you do not want it. Check the prior customization part of the README; it allows you to suppress some phones and boost others.
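The effect of prior customization can be illustrated with a toy example. The scores and offsets below are invented for illustration (the actual file format and values are described in the allosaurus README); the point is that offsets added in the log domain can flip which phone wins:

```python
# Toy per-frame log-probability scores; values are made up for illustration.
scores = {"t": -1.0, "tɕ": -0.9, "t͡s": -1.2}

# Prior offsets added in the log domain: negative suppresses, positive boosts.
prior = {"t": -1.0, "t͡s": 0.5}

adjusted = {p: s + prior.get(p, 0.0) for p, s in scores.items()}
best = max(adjusted, key=adjusted.get)
print(best)  # prints "t͡s": with this prior, t͡s now beats both tɕ and t
```

Without the prior, tɕ would win (score -0.9); suppressing t and boosting t͡s changes the ranking without retraining anything.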

For the special signs, you can use 'zː', 'z̪' or 'z̻' as long as they are valid IPA.

kormoczi commented 3 years ago

Thanks for the explanation. At first I did not want to touch the probabilities, as I do not know how that might affect other words...

So I will prepare the datasets for model fine-tuning. I read in the README that the audio files should be shorter than 10 seconds. Can you tell me which is better: only one word per audio file, or complete (short) sentences? And is a short silence needed at the beginning and end of the audio files, or not?

And may I ask what kind of dataset you used for training? Does it contain samples from all the languages? If there are Hungarian samples in it, is it possible to check them?

Thanks!

xinjli commented 3 years ago

I think both styles are possible (one word per file, or a short sentence per file); it depends on your final application, so use whatever you think is appropriate. The files do not need to contain silence at the beginning for training.
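The 10-second limit from the README is easy to check before training with Python's standard `wave` module. This sketch assumes PCM wav files; the 1-second silent 16 kHz clip is generated here only so the example is self-contained:

```python
import struct
import wave

def wav_duration(path):
    """Return the duration in seconds of a PCM wav file."""
    with wave.open(path, "rb") as w:
        return w.getnframes() / w.getframerate()

# Create a 1-second silent 16 kHz mono 16-bit wav for demonstration.
with wave.open("demo.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(16000)
    w.writeframes(struct.pack("<16000h", *([0] * 16000)))

dur = wav_duration("demo.wav")
assert dur < 10.0, "clip too long for fine-tuning"
print(dur)  # prints 1.0
```

Running `wav_duration` over the whole dataset before feature extraction catches over-long clips early.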

About the dataset: other than English, it was mainly from a corpus collection called the Babel dataset, a telephone-conversation corpus. You can see the list of corpora in the linked paper. The model available here does not use the exact same corpus set, but it is very similar. There were no Hungarian samples when I trained it.

kormoczi commented 3 years ago

I have started the fine-tuning... I have customized the phoneme inventory and prepared the train and validation datasets. The audio features look fine (so far), but I have a problem with the text features. In my dataset there are texts containing long consonants, like "akkor" (IPA translation: ɑ kː o r). The phoneme k is in the inventory; kː is not, of course, because it is not a different phoneme, just a long version. But the text feature script gives an AssertionError because of this. (And the same happens for the other long consonants as well.) The long vowels work fine (so far), because they are different phonemes in the IPA alphabet (like o and oː).

So what should I do? Do I have to put the long consonants into the phoneme inventory as well, or do something else? Thanks!
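The AssertionError can be anticipated by validating every token of a transcription against the inventory before running feature extraction. A sketch (the inventory subset here is illustrative, not the full customized Hungarian inventory):

```python
# Illustrative subset of the customized Hungarian inventory (no long consonants).
inventory = {"ɑ", "k", "o", "r", "oː"}

def missing_phones(transcription, inventory):
    """Return the tokens of a space-separated transcription that are not in
    the inventory -- exactly the symbols that would trigger the
    AssertionError in text feature extraction."""
    return [t for t in transcription.split() if t not in inventory]

print(missing_phones("ɑ kː o r", inventory))  # prints ['kː']
```

Running this over the whole text dataset lists every offending symbol at once (kː, tː, ...), so the inventory can be fixed in one pass instead of failing file by file.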

xinjli commented 3 years ago

I might be wrong, but as far as I know only vowels have this "long version"; in your case, k itself is a very short consonant and probably should not have a long version. It seems more reasonable to be something like k oː r.

If you still want to distinguish them, you can treat them as two different phonemes (o, oː) and train on that.

dmort27 commented 3 years ago

This is not correct. There are two ways of transcribing a geminate (or long) consonant: [kk] or [kː]. The first is ambiguous, since it can represent a sequence of two [k]s or a long counterpart of [k].

xinjli commented 3 years ago

So maybe we can decompose [kː] into [k] [k] in this case?

kormoczi commented 3 years ago

I don't think we can decompose [kː] into [k] [k]. Anyhow, long consonants are common in Hungarian, so I will try to use two different phonemes for the short and long versions, and will check the result...
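The two options discussed above can be sketched side by side: rewriting each geminate Xː as X X (xinjli's suggestion), or keeping Xː as its own inventory entry (kormoczi's plan). The phone lists and the consonant whitelist are illustrative; the whitelist keeps long vowels like oː intact, since those are already separate phonemes:

```python
def decompose_geminates(phones, consonants={"k", "t", "l", "n", "p", "s"}):
    """Rewrite each long consonant Xː as X X; leave long vowels alone."""
    out = []
    for p in phones:
        if p.endswith("ː") and p[:-1] in consonants:
            out += [p[:-1], p[:-1]]
        else:
            out.append(p)
    return out

def keep_geminates(phones, inventory):
    """Alternative: extend the inventory so each Xː is its own phoneme."""
    return inventory | {p for p in phones if p.endswith("ː")}

# Hungarian "akkor":
phones = ["ɑ", "kː", "o", "r"]
print(decompose_geminates(phones))                      # ['ɑ', 'k', 'k', 'o', 'r']
print(sorted(keep_geminates(phones, {"ɑ", "k", "o", "r"})))
```

The first option keeps the inventory small but loses the length distinction; the second preserves it at the cost of more output classes with less training data each.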