-
Hey, thank you so much for your great work. I am trying to train the model on the LibriTTS dataset following your tips in https://github.com/sh-lee-prml/HierSpeechpp/issues/20#issuecomment-1870806287.
I …
-
Hi, thanks for the nice work!
I have implemented almost the same structure before:
coarse (12.5 Hz, single codebook) to fine (50 Hz, four codebooks).
I have experienced low reconstruction quality on…
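For context on that coarse-to-fine setup, here is a minimal PyTorch sketch of the 4× frame-rate alignment it implies (the codebook size, embedding dimension, and tensor shapes below are illustrative assumptions, not values from either model):

```python
import torch

# Assumed setup: coarse tokens at 12.5 Hz (one codebook),
# fine tokens at 50 Hz (four codebooks) -> 4x temporal upsampling.
B, T_coarse = 2, 25                                  # e.g. 2 seconds of audio at 12.5 Hz
coarse = torch.randint(0, 1024, (B, T_coarse))       # [B, T_coarse] coarse token ids
coarse_emb = torch.nn.Embedding(1024, 256)(coarse)   # [B, T_coarse, 256]

# Align coarse features to the fine frame rate by repeating each frame 4x,
# so they can condition the prediction of the four fine codebooks.
fine_cond = coarse_emb.repeat_interleave(4, dim=1)   # [B, 4 * T_coarse, 256]
print(fine_cond.shape)                               # torch.Size([2, 100, 256])
```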
-
Hey,
thank you for sharing the training curve. I am training the model on LibriTTS 960 using exactly your preprocessing and training files. However, my training curve doesn't show the same losses you g…
-
Hello. First of all, thank you very much for your great work!
Now I'm trying to fine-tune the hierarchical speech synthesizer on my own dataset. In my understanding, in your adversarial training proc…
-
Hi, thanks again for open-sourcing the models.
I have been training the model on a Hindi dataset, but I have noticed that there seem to be random abrupt pauses in the generated sentences. Have at…
-
In the output of the text encoder on a custom dataset, here:
https://github.com/sh-lee-prml/HierSpeechpp/blob/baeaf74c111ac5fcc088744b14bad8f5c8301c93/ttv_v1/t2w2v_transformer.py#L393
the value of `x…
-
Thank you for your excellent work! I'm currently working on fine-tuning the TTV model with a non-English dataset. I've downloaded 'ttv_lt960_ckpt.pth' and attempted to load it using utils.load_che…
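In case it helps anyone at the same step, here is a minimal, self-contained sketch of loading a released checkpoint with plain PyTorch; the `"model"` key nesting and the placeholder module are assumptions, so substitute the actual TTV network built from the repo's config:

```python
import torch
from torch import nn

# Placeholder only: replace with the real TTV network built from the repo's config.
model: nn.Module = nn.Module()

ckpt = torch.load("ttv_lt960_ckpt.pth", map_location="cpu")
state_dict = ckpt.get("model", ckpt)  # assumption: weights may be nested under a "model" key

# Keep only entries whose names and shapes match the current model; this is
# useful when fine-tuning with a different text symbol set (resized embeddings),
# since strict=False alone does not tolerate shape mismatches.
own = model.state_dict()
filtered = {k: v for k, v in state_dict.items() if k in own and v.shape == own[k].shape}

missing, unexpected = model.load_state_dict(filtered, strict=False)
print(f"loaded {len(filtered)} tensors, {len(missing)} missing, {len(unexpected)} unexpected")
```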
-
![image](https://github.com/sh-lee-prml/HierSpeechpp/assets/42952005/bf24ac35-0f7d-4e98-8a9f-c0b84e1800d3)
As the comment says, the difference between the two cleaners is that cleaners2 has punctua…
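For anyone comparing them, a minimal sketch of the usual VITS-style distinction (assuming the standard phonemizer-based cleaners; the exact code in this repo may differ slightly):

```python
from phonemizer import phonemize

text = "Hello, world! This is a test."

# english_cleaners-style: punctuation is dropped and no stress marks are added.
plain = phonemize(text, language="en-us", backend="espeak", strip=True)

# english_cleaners2-style: punctuation is preserved and stress marks are kept,
# which enlarges the symbol set the text encoder has to model.
with_punct = phonemize(
    text, language="en-us", backend="espeak", strip=True,
    preserve_punctuation=True, with_stress=True,
)
print(plain)
print(with_punct)
```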
-
This is really good; I'm looking forward to a Chinese model. When will the training code be open-sourced so that we can train it ourselves?