AI4Bharat / Indic-TTS

Text-to-Speech for languages of India
MIT License

Model Loading & Inference Time #15

Closed sreeshank18 closed 1 year ago

sreeshank18 commented 1 year ago

Hello Team,

Recently, I set up Indic-TTS on our A100 GPU instance. What I observed is that model loading takes ~12 minutes when I use the flag use_cuda=True, which is quite long.

When I disable the GPU with use_cuda=False, the model loads very quickly (~1592 ms), but inference time is very high.

It looks like I am missing something. Can anyone help me figure out what I am doing wrong so I can fix this timing issue?

Thanks
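
For anyone trying to reproduce this, a minimal timing sketch like the following separates load time from inference time. Note that `load_model`, the checkpoint path, and `model.tts` below are placeholders for whatever entry points Indic-TTS actually exposes (they are not named in this thread); only the `use_cuda` flag mirrors the report above:

```python
import time

def timed(label, fn, *args, **kwargs):
    """Run fn, print its wall-clock time in ms, and return its result."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{label}: {elapsed_ms:.1f} ms")
    return result

# Hypothetical usage -- load_model / model.tts stand in for the real
# Indic-TTS loader and synthesis calls:
# model = timed("model load", load_model, "checkpoint.pth", use_cuda=True)
# wav = timed("inference", model.tts, "sample text")
```

Timing the two stages separately like this makes it easier to report whether a slowdown is in checkpoint loading / CUDA initialization or in synthesis itself.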

sreeshank18 commented 1 year ago

The problem was with the GPU.