thomas-endres-tng opened this issue 1 year ago
Hello!
Same problem for me. I also assumed 30 fps and modified the function from AVCT a bit to parse the phonemes.
The phoneme parsing itself works, but as @thomas-endres-tng mentioned, the phonemes come out very different. I use pretty much the same code. I also tried a slightly different approach, doing segmentation first and then processing each segment with full_utt=False, but it was still not as good as AVCT.
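For reference, the segmentation step I tried looks roughly like this; the RMS-energy splitter and its threshold are simplified stand-ins for what I actually used, so treat the numbers as illustrative:

```python
import array
import wave

def speech_segments(wav_path, win_ms=30, threshold=500):
    """Return (start_sample, end_sample) spans whose windowed RMS exceeds
    threshold. Assumes 16-bit mono PCM; threshold is an illustrative guess."""
    with wave.open(wav_path, "rb") as wav:
        rate = wav.getframerate()
        samples = array.array("h", wav.readframes(wav.getnframes()))
    win = max(1, rate * win_ms // 1000)
    segments, start = [], None
    for i in range(0, len(samples), win):
        chunk = samples[i:i + win]
        rms = (sum(s * s for s in chunk) / len(chunk)) ** 0.5
        if rms >= threshold and start is None:
            start = i                    # speech begins in this window
        elif rms < threshold and start is not None:
            segments.append((start, i))  # speech ended before this window
            start = None
    if start is not None:
        segments.append((start, len(samples)))
    return segments
```

Each span is then decoded on its own, i.e. one start_utt/process_raw/end_utt cycle per segment with full_utt=False.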
I tried different versions of the language model, including the most recent one of course, and I am unable to get the same words or the same phonemes. For word recognition on the AdamSchiff.wav file from AVCT, my result still misses some words, and "Putin" is always recognised as "proven", whereas it appears correctly in the AVCT json file.
Since I plan to use StyleTalk for my internship project, any help would be much appreciated!
Thank you very much :)
Unfortunately, your reference concerning phonemes does not point to anything more specific than the link to CMU Sphinx.
I did a bit of research and ended up with the following code:
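Roughly the following, using pocketsphinx's allphone mode via the old 0.1.x API; the beam values, the 44-byte header skip, and the frame-resampling helper are specific to my attempt, so treat them as assumptions:

```python
import os

SPHINX_FPS = 100          # pocketsphinx reports segment times in 10 ms frames
ASSUMED_FRAME_RATE = 30   # video fps I assume for the AVCT samples

def decode_phonemes(wav_path):
    """Run allphone decoding; returns (phone, start_frame, end_frame) triples
    in pocketsphinx's 100 fps frame units. Assumes 16 kHz mono 16-bit PCM."""
    from pocketsphinx import Decoder, get_model_path  # lazy: needs pocketsphinx 0.1.x
    model_path = get_model_path()
    config = Decoder.default_config()
    config.set_string("-hmm", os.path.join(model_path, "en-us"))
    config.set_string("-allphone", os.path.join(model_path, "en-us-phone.lm.bin"))
    config.set_string("-logfn", os.devnull)
    config.set_float("-lw", 2.0)       # tuning values from my setup, not AVCT
    config.set_float("-beam", 1e-20)
    config.set_float("-pbeam", 1e-20)
    decoder = Decoder(config)
    with open(wav_path, "rb") as f:
        f.read(44)                     # skip the RIFF header (assumption)
        audio = f.read()
    decoder.start_utt()
    decoder.process_raw(audio, False, True)  # full_utt=True: one-pass decode
    decoder.end_utt()
    return [(seg.word, seg.start_frame, seg.end_frame) for seg in decoder.seg()]

def segments_to_frames(segments, fps=ASSUMED_FRAME_RATE):
    """Resample decoder segments to one phone label per video frame."""
    if not segments:
        return []
    total = segments[-1][2] + 1        # end_frame of the last segment, inclusive
    n_video = int(total * fps / SPHINX_FPS) + 1
    frames = []
    for i in range(n_video):
        t = i * SPHINX_FPS / fps       # video frame time in sphinx frame units
        label = next((p for p, s, e in segments if s <= t <= e), segments[-1][0])
        frames.append(label)
    return frames
```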
I'm using the phindex.json file from https://github.com/FuxiVirtualHuman/AAAI22-one-shot-talking-face/blob/main/phindex.json and an ASSUMED_FRAME_RATE of 30 (this seems to match the number of phonemes in your samples better than the 25 fps referenced in the papers).
However, for the sample wave files my phonemes look very different from yours. What am I doing wrong?
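For completeness, this is roughly how I map the decoder labels onto phindex.json; the inline dictionary is only an excerpt standing in for the real file, and the handling of noise marks like ++NSN++ is my own guess:

```python
import json

def load_phindex(path=None):
    """Load phoneme->index mapping; falls back to a small excerpt
    (stand-in for the real phindex.json) when no path is given."""
    if path is not None:
        with open(path) as f:
            return json.load(f)
    return {"SIL": 0, "AA": 1, "AH": 2, "P": 3, "T": 4}

def phones_to_indices(phones, phindex):
    """Map decoder phone labels to phindex indices, one index per label."""
    indices = []
    for p in phones:
        label = p.upper()
        if label.startswith("+") or label not in phindex:
            label = "SIL"  # treat noise/unknown marks as silence (assumption)
        indices.append(phindex[label])
    return indices
```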