-
Thanks for the codebase. Good work!
In the paper, speech is split into timbre (via a speaker embedding), pitch, rhythm, and content. If I am not wrong, the accent information of the speaker is not c…
-
Hi,
I am attempting zero-shot voice conversion using only a few audio sentences from a target speaker. I am training a speaker embedding for this speaker using make_spect.py and make_metadata.py…
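For context, a common zero-shot recipe is to extract one embedding per target-speaker utterance and average them into a single speaker vector. A minimal sketch of that averaging step, assuming the per-utterance embeddings have already been computed by whatever encoder the repo uses (the random vectors below are stand-ins, not real encoder output):

```python
import numpy as np

def average_speaker_embedding(utterance_embs):
    """Average per-utterance embeddings into one speaker embedding.

    utterance_embs: list of 1-D arrays, one per target-speaker utterance.
    Returns a unit-norm speaker embedding.
    """
    emb = np.mean(np.stack(utterance_embs), axis=0)
    return emb / np.linalg.norm(emb)

# Stand-ins for real encoder outputs (3 utterances, 256-dim each):
rng = np.random.default_rng(0)
embs = [rng.standard_normal(256) for _ in range(3)]
spk_emb = average_speaker_embedding(embs)
print(spk_emb.shape)  # (256,)
```

Renormalizing after the mean keeps the result on the same unit sphere the encoder was trained to produce, which usually matters for cosine-similarity-based conditioning.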
-
First of all, I really appreciate this repo; it helped me a lot in learning about TTS.
However, I think I ran into some problems at the inference stage.
I trained the model on LibriTTS with adjusted configs…
-
Thanks for such great work. In the DNS Challenge, personalized speech enhancement is gradually replacing non-personalized speech enhancement. This is a challenging and interesting task, since it need t…
-
Hi @caizexin,
I have been trying to implement your work on my own dataset. I am trying to run the speaker embedding network in the [deep_speaker](https://github.com/caizexin/tf_multispeakerTTS_fc/…
-
I followed the provided instructions.
I converted the demo_part3 notebook into a plain Python file to test the code:
```python
# Import necessary libraries
import os
import sys
import torch
fro…
-
Adding here some implementation improvements that I need to do courtesy of comments from @r9y9
- [x] Change F0 to log-F0 (and continuous)
- [ ] Use original speaker embedding during training,
- …
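The first item above (continuous log-F0) is usually done by linearly interpolating F0 across unvoiced frames before taking the log, so the contour has no zeros or discontinuities. A minimal sketch, assuming an F0 contour in Hz with 0 marking unvoiced frames (the exact extraction pipeline in the repo may differ):

```python
import numpy as np

def continuous_log_f0(f0):
    """Convert an F0 contour (Hz, 0 = unvoiced) to continuous log-F0.

    Unvoiced frames are filled by linear interpolation between the
    neighbouring voiced frames (edges are clamped), then the whole
    contour is mapped to the log domain.
    """
    f0 = np.asarray(f0, dtype=float)
    voiced = f0 > 0
    if not voiced.any():
        raise ValueError("contour has no voiced frames")
    idx = np.arange(len(f0))
    # np.interp clamps to the first/last voiced value at the edges.
    filled = np.interp(idx, idx[voiced], f0[voiced])
    return np.log(filled)

lf0 = continuous_log_f0([0.0, 100.0, 0.0, 0.0, 200.0, 0.0])
# exp(lf0) -> [100, 100, 133.33, 166.67, 200, 200]
```

Keeping a separate voiced/unvoiced flag alongside the interpolated contour is common, so the model can still distinguish real pitch from filled-in values.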
-
Hello,
I recently came across your experiments in the so-vits-svc project [link](https://github.com/voicepaw/so-vits-svc-fork/discussions/282#discussion-5067096). Since I wanted a way to generate uniq…
-
**Can anyone help me please**
(venv) C:\Users\Dragn\Documents\WeeaBlind-master\WeeaBlind-master\venv>python weeablind.py
C:\Users\Dragn\Documents\WeeaBlind-master\WeeaBlind-master\venv\output\sa…
-
Hello,
I'm trying to train a new model on the VCTK dataset. First, I generated the speaker embeddings using `python generate_embeddings.py`. Now I'm training the model using train.py, but I have a proble…