neonbjb / tortoise-tts

A multi-voice TTS system trained with an emphasis on quality
Apache License 2.0

Batch Inference? #694

Open addytheyoung opened 10 months ago

addytheyoung commented 10 months ago

We're having trouble running inference efficiently at scale. By default we process the audio clips one at a time, but is there any support for batch inference to speed things up, similar to how vLLM and other LLM serving libraries work?

This is a solved problem for LLMs (or at least there are plenty of inference options), so I'm just wondering whether the same exists for TTS. Thanks.
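For reference, the serving-side trick we mean is dynamic batching: briefly collect incoming requests and run them through the model as one batch. Here is a rough stdlib-only sketch of that pattern; everything in it is hypothetical illustration, not Tortoise's API, and `fake_tts_batch` is a placeholder where a real batched call into the model would go:

```python
import queue
import threading

class DynamicBatcher:
    """Collects single TTS requests into batches so the model can
    process several utterances per forward pass."""

    def __init__(self, batch_fn, max_batch=8, timeout_s=0.05):
        self.batch_fn = batch_fn      # callable: list[str] -> list[result]
        self.max_batch = max_batch
        self.timeout_s = timeout_s
        self.requests = queue.Queue()
        self.worker = threading.Thread(target=self._loop, daemon=True)
        self.worker.start()

    def submit(self, text):
        """Enqueue one utterance; returns (Event, result holder)."""
        done = threading.Event()
        holder = {}
        self.requests.put((text, done, holder))
        return done, holder

    def _loop(self):
        while True:
            # Block for the first request, then grab more until the
            # batch is full or the short timeout expires.
            batch = [self.requests.get()]
            while len(batch) < self.max_batch:
                try:
                    batch.append(self.requests.get(timeout=self.timeout_s))
                except queue.Empty:
                    break
            texts = [t for t, _, _ in batch]
            results = self.batch_fn(texts)  # one batched model call
            for (_, done, holder), res in zip(batch, results):
                holder["audio"] = res
                done.set()

def fake_tts_batch(texts):
    # Stand-in for a batched model call; a real setup would run the
    # TTS model on a padded batch of conditioned inputs here.
    return [f"<audio for: {t}>" for t in texts]

if __name__ == "__main__":
    batcher = DynamicBatcher(fake_tts_batch)
    pending = [batcher.submit(t) for t in ["hello", "world", "batch me"]]
    for done, holder in pending:
        done.wait()
        print(holder["audio"])
```

This only amortizes per-call overhead on the serving side; getting a real speedup still depends on the model itself accepting batched inputs, which is the part we're asking about.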

lobsterchan27 commented 4 months ago

Did you find a solution? I'm interested as well.