-
A typical example where pitch accent (the Japanese counterpart of stress) is important is 橋 (bridge, read as はし, hashi) vs. 箸 (chopsticks, also read as はし, hashi). The former (橋) is accented on the second syllable, whereas the latter (…
-
Hi,
I would like to use the pretrained acoustic model for English but use it in combination with a new in-domain language model, for which I have to generate pronunciations.
I am used to the Kal…
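As a rough sketch of the "generate pronunciations for new language-model words" step, assuming a Kaldi-style plain-text lexicon (one word per line followed by its phones); `letter_to_sound` below is a deliberately naive placeholder for a real G2P tool, and the toy letter-to-phone table is illustrative only:

```python
# Sketch: format pronunciations for new in-domain words as
# Kaldi-style lexicon.txt lines ("WORD ph1 ph2 ...").
# letter_to_sound() is a hypothetical stand-in for a trained
# grapheme-to-phoneme model.

LETTER_PHONES = {  # toy letter-to-phone rules (illustrative only)
    "a": "AH", "b": "B", "c": "K", "e": "EH", "k": "K",
    "l": "L", "o": "OW", "s": "S", "t": "T",
}

def letter_to_sound(word):
    """Naive G2P: map each letter to a phone, skipping unknown letters."""
    return [LETTER_PHONES[ch] for ch in word.lower() if ch in LETTER_PHONES]

def lexicon_lines(words):
    """Format words as Kaldi lexicon.txt lines."""
    return ["{} {}".format(w.upper(), " ".join(letter_to_sound(w)))
            for w in words]

print(lexicon_lines(["cat", "boat"]))
```

In practice one would replace `letter_to_sound` with an actual G2P system trained on the existing dictionary, then append the new lines to the lexicon before recompiling the decoding graph.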
-
Instead of language-specific voices, we could share a single voice across many languages with an IPA specification. We could translate words from many languages into IPA strings and let the voice synthesizer read out from…
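A rough illustration of that idea, using a hypothetical hand-picked mini-lexicon as a stand-in for real per-language G2P front ends:

```python
# Sketch: route words from several languages through a shared IPA
# representation, so one IPA-aware synthesizer could voice all of them.
# The mini-lexicon below is a hypothetical stand-in for real G2P tools.

IPA_LEXICON = {
    ("en", "water"): "ˈwɔːtə",
    ("de", "Wasser"): "ˈvasɐ",
    ("fr", "eau"): "o",
}

def to_ipa(lang, word):
    """Look up the IPA string for (language, word); None if unknown."""
    return IPA_LEXICON.get((lang, word))

for lang, word in [("en", "water"), ("de", "Wasser"), ("fr", "eau")]:
    print(lang, word, "->", to_ipa(lang, word))
```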
-
This has been a long-standing problem that nobody needs (with all the blame on my initial decision, back in the context of Phon 1.0).
Our use of the IPA 'g' creates issues with virtually everyone, as t…
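The 'g' issue above typically comes down to the ASCII letter g (U+0067) versus the IPA script ɡ (U+0261): they look nearly identical but compare as different characters, so transcriptions using one never match data using the other. A minimal normalization sketch (which codepoint to standardize on is an assumption; pick whichever your toolchain expects):

```python
# ASCII "g" (U+0067) and IPA "ɡ" (U+0261) are visually similar but
# distinct codepoints, which breaks string comparisons of transcriptions.
ASCII_G = "\u0067"   # g
IPA_G = "\u0261"     # ɡ (LATIN SMALL LETTER SCRIPT G)

def normalize_ipa_g(transcription, target=IPA_G):
    """Collapse both g variants onto a single chosen codepoint."""
    source = ASCII_G if target == IPA_G else IPA_G
    return transcription.replace(source, target)

print(normalize_ipa_g("go"))  # prints "ɡo" (with U+0261)
```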
-
I have 2 questions.
I think it's quite odd that the Italian, Russian and Hebrew dictionaries are missing here. Did you begin work on those, and do you have existing material I could use to create them?
I ha…
-
Thanks for the amazing repository. I realized that your code raises errors in a lot of cases when I try to convert from IPA to ARPAbet. For example, this entry for "abuse" from the dictionary:
```
from arpabeta…
```
-
Describing the pronunciation of words is usually done using the International Phonetic Alphabet (IPA), which uses Unicode characters.
However, this package outputs ASCII characters, and there exists mu…
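A common way to bridge the ASCII ARPAbet output and Unicode IPA is a lookup table. A partial, hedged sketch (only a handful of the roughly 39 ARPAbet phones are shown, and the rendering of reduced vowels such as AH0 varies between conventions):

```python
# Partial ARPAbet -> IPA table (illustrative subset, not complete).
# Stress digits (0/1/2) on vowels are stripped before lookup here;
# real converters may instead turn them into IPA stress marks, and
# unstressed AH0 is often rendered as schwa (ə) rather than ʌ.
ARPABET_TO_IPA = {
    "AA": "ɑ", "AE": "æ", "AH": "ʌ", "B": "b", "JH": "dʒ",
    "IY": "i", "S": "s", "UW": "u", "Y": "j", "T": "t", "K": "k",
}

def arpabet_to_ipa(phones):
    """Convert a list of ARPAbet phones to an IPA string."""
    out = []
    for p in phones:
        base = p.rstrip("012")                      # drop stress digit
        out.append(ARPABET_TO_IPA.get(base, "?"))   # "?" marks unmapped phones
    return "".join(out)

print(arpabet_to_ipa(["AH0", "B", "Y", "UW1", "S"]))  # "abuse" -> ʌbjus
```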
-
Hi,
what about support for different character sets in other languages? Since the current dictionary is the CMU dictionary, to get it working for a new language, I suppose one would need the top commo…
-
Hello!
First off, thank you so much for creating this IPA transcription app. I am very new to IPA. I have used your online IPA converter for Latin and now German and it has been of immense help to…
-
https://elevenlabs.io/docs/api-reference/how-to-use-pronunciation-dictionaries
"Alias tags are supported by all models. Phoneme tags only work with the models eleven_turbo_v2 and eleven_monolingual…