Closed: AK391 closed this issue 1 year ago.
I'm not familiar with Gradio. I did try it for StyleTTS but had no success. I'll take a look at it later when I get time, but if anyone is interested in making a demo for now, feel free to contribute!
Someone is already working on it: https://github.com/yl4579/StyleTTS2/pull/53, and we are figuring out some details of it. I will let you know when it is ready.
Hi @AK391. I’ve released a Gradio demo here with voice cloning, multi-speaker support, and LJSpeech support.
@fakerybakery I think for the default voices, it would be great if you could find all the audio samples in the training data, compute the style of each sample, take the average, and save that as the speaker embedding. This is probably more efficient than computing the style every time the demo is run, and also a more accurate reflection of the speaker.
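For reference, a minimal sketch of that precomputation, assuming a compute_style(wav_path) helper like the one in the inference demos that returns a style tensor for one clip; the helper name and "one folder per speaker" layout are assumptions, not the repo's exact code:

```python
# Hypothetical sketch: precompute one averaged style vector per speaker from
# the training clips, instead of recomputing a style on every demo launch.
# compute_style(wav_path) is assumed to be in scope (e.g. from the demo's
# inference code) and to return a style tensor for a single reference clip.
import glob
import os
import torch

speaker_embeddings = {}
for speaker_dir in sorted(glob.glob("Data/train_wavs/*")):  # hypothetical layout
    styles = [compute_style(wav) for wav in sorted(glob.glob(f"{speaker_dir}/*.wav"))]
    if styles:
        speaker_embeddings[os.path.basename(speaker_dir)] = torch.stack(styles).mean(dim=0)

# Save once; the demo can then torch.load() this file at startup.
torch.save(speaker_embeddings, "speaker_embeddings.pt")
```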
Yes, you’re probably right. No wonder starting the demo took so long each time! Thank you, I’ll push a fix tomorrow :)
Hi, someone asked here if I would release a local Gradio GUI to run (the comment was later deleted for some reason, but it was still in my inbox).
I am planning to eventually release it and perhaps make a PR to the main repository, but the code quality is currently pretty... low. I'm going to clean it up a bit and then try to release it.
Thanks to @AK391 for posting this solution on X/Twitter! Just realized you can run any Hugging Face space on Docker.
docker run -it -p 7860:7860 --platform=linux/amd64 --gpus all \
registry.hf.space/styletts2-styletts2:latest python app.py
A few more features that could be added:
Also, the max_length of the BERT encoder is 512, so I think a better way of checking this limit is to first phonemize the input, then use len() on the phonemized text and make sure it is less than 512. Thanks again for your help in making the demo!
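A rough sketch of that check, assuming the demo uses phonemizer with the espeak backend; the function and constant names here are illustrative, not the demo's actual code:

```python
# Illustrative sketch: enforce the 512 max_length of the BERT encoder on the
# phonemized text rather than on raw characters.
from phonemizer import phonemize

MAX_PHONEME_LEN = 512

def phonemize_and_check(text: str) -> str:
    phonemes = phonemize(
        text,
        language="en-us",
        backend="espeak",
        preserve_punctuation=True,
        with_stress=True,
    )
    if len(phonemes) >= MAX_PHONEME_LEN:
        raise ValueError(
            f"Text is {len(phonemes)} phonemes long after phonemization; "
            f"the BERT encoder's max_length is {MAX_PHONEME_LEN}."
        )
    return phonemes
```

The same check could run before splitting long texts, so the limit tracks the actual model constraint rather than a raw character count.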
Hi, I can try to implement this. 1 and 2 seem doable, but 3 seems a bit harder. I'll look into this later today! Thanks for the suggestions!
Can you please remove the "Access code" in the "Long Text" feature? It is a problem when the Docker image is run locally.
OK, I'll remove the long-text feature in a couple of minutes, or add a character limit.
Hi @yl4579, a couple of things: for len, do you mean just getting the length of the phonemes, or of the tokens?
@fakerybakery Thanks a lot for your reply. I'm looking forward to the local version. I tested the Hugging Face demo and it looks awesome!
@fakerybakery .to('cuda') and len should be fine too.
Would you like to start by making a local copy of the current HF demo and then iterating on it to improve it? @fakerybakery
Yeah, I'll start doing that. However, I'm using macOS and can't figure out how to install espeak-ng for phonemizer (I tried MacPorts but it didn't work; maybe I'll develop it on a VM).
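In case it helps, one possible workaround on macOS (untested here) is to install espeak-ng with Homebrew and point phonemizer at the library explicitly; the dylib path below is an assumption typical of Apple Silicon Homebrew:

```python
# Workaround sketch for macOS: after `brew install espeak-ng`, tell phonemizer
# where the dylib lives. The path is an assumption; check
# `brew --prefix espeak-ng` on your machine.
import os

os.environ["PHONEMIZER_ESPEAK_LIBRARY"] = "/opt/homebrew/lib/libespeak-ng.dylib"

from phonemizer import phonemize

print(phonemize("Hello world", language="en-us", backend="espeak"))
```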
Hi, congrats on StyleTTS2! It would be great to set up a Gradio demo for it on Hugging Face. You can see the guide to get started here: https://huggingface.co/docs/hub/spaces-sdks-gradio and here is a recent example: https://huggingface.co/spaces/coqui/xtts. @yvrjsharma