AI4Bharat / Indic-TTS

Text-to-Speech for languages of India

GPU error #16

Open rohitdahiya1 opened 10 months ago

rohitdahiya1 commented 10 months ago

When I run sample.py on Google Colab with a T4 GPU, the model loads onto the GPU correctly, but inference via inference_from_text fails with the error below:

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)

I have tried many ways to put the model and the input text tensor on the same device, but I keep getting the same error. It worked fine the first few times I ran it. Please help, @GokulNC.
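
For anyone hitting the same thing: this error means at least one tensor in the failing op is still on the CPU while the model weights are on cuda:0. Below is a minimal, generic PyTorch sketch (a toy model, not the Indic-TTS synthesizer) that reproduces the same class of error and shows how to check which device everything lives on:

```python
import torch

def report_devices(model: torch.nn.Module, *tensors: torch.Tensor) -> None:
    # Show where the model weights and each tensor currently live.
    print("model:", next(model.parameters()).device)
    for i, t in enumerate(tensors):
        print(f"tensor[{i}]:", t.device)

if torch.cuda.is_available():
    model = torch.nn.Linear(4, 4).cuda()   # weights end up on cuda:0
    x = torch.randn(1, 4)                  # input tensor left on the CPU
    report_devices(model, x)               # model: cuda:0 / tensor[0]: cpu
    # Calling model(x) here would raise the same "Expected all tensors to be
    # on the same device" RuntimeError as in the traceback above.
    x = x.to(next(model.parameters()).device)
    model(x)                               # works once the input is moved
```

Note that in this issue the mismatched tensor seems to be built inside the library itself (the index_select hint suggests an index/embedding lookup), so moving only user-side inputs may not be enough; see the comments below.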

Instincts03 commented 5 months ago

In TTS/utils/synthesizer.py, line 376 sets vocoder_device = "cpu"; change it to vocoder_device = "cuda".
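
For clarity, the suggested edit amounts to the following (a sketch only; the exact surrounding code in TTS/utils/synthesizer.py varies between releases):

```python
# TTS/utils/synthesizer.py, around the line cited above (sketch, not verbatim)

# before: the vocoder step is forced onto the CPU even when the acoustic
# model ran on the GPU, which triggers the device-mismatch RuntimeError
vocoder_device = "cpu"

# after: keep the vocoder step on the GPU as well
vocoder_device = "cuda"
```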

vrindamathur1428 commented 3 months ago

> In TTS/utils/synthesizer.py, line 376 sets vocoder_device = "cpu"; change it to vocoder_device = "cuda".

But if I'm already passing use_cuda=True, why do I need to do this at all?
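
If the device really is hard-coded to "cpu" at that point regardless of use_cuda (I have not checked every release), a less brittle edit would be to derive it from the flag, for example:

```python
# Sketch: follow the use_cuda flag instead of hard-coding a device string,
# so that passing use_cuda=True is enough on its own. Names match the comment
# above; the exact context in TTS/utils/synthesizer.py may differ by version.
use_cuda = True  # whatever the caller passed in
vocoder_device = "cuda" if use_cuda else "cpu"
```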

punyabrota commented 2 months ago

I am also having the same problem when running sample.py. Changing vocoder_device from "cpu" to "cuda" did not help either. Any pointers or guidance, please?
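
In case it helps the debugging: here is a generic helper (not part of Indic-TTS) for pushing every tensor in whatever inputs you build yourself onto the model's device before calling inference. It only covers user-side tensors; if the CPU tensor is created inside the library, as the index_select hint in the traceback suggests, the library code itself has to be patched.

```python
import torch

def to_device(obj, device):
    """Recursively move tensors nested in dicts/lists/tuples to `device`."""
    if torch.is_tensor(obj):
        return obj.to(device)
    if isinstance(obj, dict):
        return {k: to_device(v, device) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(to_device(v, device) for v in obj)
    return obj

# usage sketch:
#   device = next(model.parameters()).device
#   inputs = to_device(inputs, device)
```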

jerrinhaloocom commented 2 months ago

I am also trying to run it on the GPU; on the CPU there is no problem. If you find any ideas or solutions, please comment here.