rsong0606 opened this issue 4 months ago
I tried it as well, and inference is very slow. Can anyone take a look at this issue?
Which hardware are you using? Can you please provide more details about your environment setup?
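For example, something like this snippet (just a quick sketch, assuming `torch` and `transformers` are installed) would give us the basics:

```python
import torch
import transformers

# Print the versions and GPU info most relevant to inference speed.
print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```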
Check out their guide on speeding up inference: https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md
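For instance, loading the model in half precision with SDPA attention is one of the optimizations covered there. A rough sketch (the checkpoint name is just an example, and availability of fp16/SDPA depends on your hardware and library versions):

```python
import torch
from parler_tts import ParlerTTSForConditionalGeneration

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Half precision + SDPA attention usually cut generation time noticeably on GPU.
model = ParlerTTSForConditionalGeneration.from_pretrained(
    "parler-tts/parler-tts-mini-v1",
    attn_implementation="sdpa",
    torch_dtype=torch.float16 if device != "cpu" else torch.float32,
).to(device)
```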
Hey Team, good work overall!
I am using the sample code below and played around with different descriptions. Overall, this is great. However, it took about 9 seconds to generate a 20-token text.
Talia's voice generation time: 8.13 seconds
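For reference, this is roughly what I'm timing. It's a minimal sketch adapted from the README usage example, so the checkpoint name and the description text are assumptions rather than my exact setup:

```python
import time

import soundfile as sf
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")

prompt = "Hey, how are you doing today?"
description = "Talia speaks in a calm, clear voice at a moderate pace with very little background noise."

# Tokenize the voice description and the text prompt separately.
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

# Time only the generation call.
start = time.perf_counter()
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
print(f"Talia's voice generation time: {time.perf_counter() - start:.2f} seconds")

audio = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio, model.config.sampling_rate)
```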