152334H / tortoise-tts-fast

Fast TorToiSe inference (5x or your money back!)
GNU Affero General Public License v3.0
755 stars 176 forks

CPU-only machines Mac Silicon M1 with longer texts #137

Open yin-ori opened 4 months ago

yin-ori commented 4 months ago

Hey all,

I've been trying to set up tortoise in a virtualenv (now working fine with Python 3.11.1; with 3.12.2 it was unable to build wheels). I am able to run do_tts.py and the code provided in tortoise_tts.ipynb. However, I would like to run it with longer texts, so I am trying read.py and read_fast.py.

In doing so, I get an error stating I don't have a GPU, which is true, but I am not sure where to change the settings as suggested:

 raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

In which file, on which line, am I supposed to change that? I checked whether read.py and read_fast.py contain torch.load, but they don't, so I am wondering where I can make the necessary change. (I have not modified the above-mentioned files, so clarifying where to make the change in the original code would be much appreciated.)
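For reference, the change the error message asks for is a sketch like the one below, assuming the torch.load call lives in the library code that read.py imports rather than in read.py itself (the file name and checkpoint path here are placeholders, not the actual location in tortoise-tts-fast):

```python
import torch

# Pick a device that exists on this machine; on a CPU-only box
# (e.g. Mac M1 without CUDA) this resolves to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Round-trip a small tensor to stand in for a model checkpoint.
# Passing map_location remaps GPU-saved storages to the chosen
# device instead of raising the "deserialize on a CUDA device" error.
torch.save(torch.zeros(3), '/tmp/example_ckpt.pth')
state = torch.load('/tmp/example_ckpt.pth', map_location=device)
```

The fix is the map_location keyword: wherever the failing torch.load call is, adding map_location=torch.device('cpu') (or a device chosen as above) makes loading work without a GPU.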

Best