Closed: KwekuYamoah closed this issue 2 years ago
Hi @KwekuYamoah, thanks for your attention. You need to preprocess VCTK first, following README.md, to get the speaker embeddings. I have also shared pre-extracted speaker embeddings here for users who want to generate speech without that burden, so please enjoy the project with them!
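For reference, here is a minimal sketch of how the shared pre-extracted embeddings could be dropped into the location the preprocessing step would normally create. The paths (`downloads/spker_embed` and `preprocessed_data/VCTK/spker_embed`) are assumptions; adjust them to match your checkout and config files.

```python
# Minimal sketch: copy the shared pre-extracted speaker embeddings into the
# directory the preprocessing step would have produced.
# NOTE: both paths below are assumptions, not guaranteed repo layout.
import shutil
from pathlib import Path

downloaded = Path("downloads/spker_embed")           # where you unzipped the shared embeddings (assumed)
target = Path("preprocessed_data/VCTK/spker_embed")  # path synthesize.py is expected to read from (assumed)

target.mkdir(parents=True, exist_ok=True)
for f in downloaded.glob("*.npy"):
    shutil.copy2(f, target / f.name)
    print(f"copied {f.name}")
```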
Thank you very much for your response. It does solve my problem. Kudos
Sorry to bother you, but I ran into some problems with VCTK. Could I please get the preprocessed data for VCTK so I can check? Thanks very much. 1215544940@qq.com
Hello, thank you very much for your brilliant open-source project. I have been able to do single and batch generations using the LJSpeech dataset. However, when I try to replicate the results for the VCTK dataset, it fails.
I run the following command:
!python3 synthesize.py --text "Hello World" --model naive --restore_step 300000 --mode single --dataset VCTK
I obtain the following output:
I investigated further and discovered that the expected speaker embedding folder and file do not exist in my directory. Any pointer on how to solve this issue would be appreciated.
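For anyone hitting the same error, a quick way to check whether the embedding file for your target speaker is actually on disk. The folder and the `<speaker>-spker_embed.npy` naming are assumptions; compare them with what preprocess.py writes in your setup.

```python
# Minimal sketch: verify that a speaker embedding file exists before synthesizing.
# NOTE: the directory and file-name pattern are assumptions, not confirmed repo conventions.
from pathlib import Path

speaker = "p225"  # hypothetical VCTK speaker ID
embed_dir = Path("preprocessed_data/VCTK/spker_embed")
embed_file = embed_dir / f"{speaker}-spker_embed.npy"

if embed_file.exists():
    print(f"found {embed_file}")
else:
    present = sorted(p.name for p in embed_dir.glob("*.npy")) if embed_dir.exists() else []
    print(f"missing {embed_file}; files present: {present[:5]}")
```

If nothing is listed, the VCTK preprocessing step has not been run (or the pre-extracted embeddings have not been copied in), which matches the error above.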