coqui-ai / TTS

🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
http://coqui.ai
Mozilla Public License 2.0
35.29k stars 4.31k forks

[Bug] Cannot work with CPU #2989

Closed: Leppan closed this issue 1 year ago

Leppan commented 1 year ago

Describe the bug

Hello! I have an RX 580 graphics card in my computer. However, Torch/TTS refuses to work with it for a reason I don't understand: it says that only the CPU is being used. Please help me get either the graphics card or the CPU working properly.
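
For context: the RX 580 is an AMD card, and the standard PyTorch builds (especially on Windows) only expose CUDA for NVIDIA GPUs, so `torch.cuda.is_available()` returns False there and the usual device-selection idiom falls back to the CPU. A minimal check, assuming only that PyTorch is installed:

```python
import torch

# On a machine without an NVIDIA GPU -- or without a CUDA-enabled
# PyTorch build -- this prints False, and the device-selection idiom
# below falls back to the CPU.
print(torch.cuda.is_available())

device = "cuda" if torch.cuda.is_available() else "cpu"
print(device)
```
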

To Reproduce

import torch
from TTS.api import TTS

device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = TTS().list_models()[0]
tts = TTS(model_name).to(device)
wav = tts.tts("This is a test! This is also a test!!", speaker=tts.speakers[0], language=tts.languages[0])
tts.tts_to_file(text="text", speaker=tts.speakers[0], language=tts.languages[0], file_path="output.wav")

Expected behavior

No response

Logs

No response

Environment

- TTS: latest version

Additional context

No API token found for 🐸Coqui Studio voices - https://coqui.ai
Visit 🔗https://app.coqui.ai/account to get one.
Set it as an environment variable `export COQUI_STUDIO_TOKEN=<token>`

tts_models/multilingual/multi-dataset/xtts_v1 is already downloaded.
Using model: xtts
Traceback (most recent call last):
  File "D:\ARGUS\tggptbot\TGGPTMY\text-to speech\testfile-load-tts.py", line 10, in <module>
    tts = TTS(model_name).to(device)
  File "C:\Program Files\Python311\Lib\site-packages\TTS\api.py", line 81, in __init__
    self.load_tts_model_by_name(model_name, gpu)
  File "C:\Program Files\Python311\Lib\site-packages\TTS\api.py", line 185, in load_tts_model_by_name
    self.synthesizer = Synthesizer(
  File "C:\Program Files\Python311\Lib\site-packages\TTS\utils\synthesizer.py", line 109, in __init__
    self._load_tts_from_dir(model_dir, use_cuda)
  File "C:\Program Files\Python311\Lib\site-packages\TTS\utils\synthesizer.py", line 164, in _load_tts_from_dir
    self.tts_model.load_checkpoint(config, checkpoint_dir=model_dir, eval=True)
  File "C:\Program Files\Python311\Lib\site-packages\TTS\tts\models\xtts.py", line 645, in load_checkpoint
    self.load_state_dict(load_fsspec(model_path)["model"], strict=strict)
  File "C:\Program Files\Python311\Lib\site-packages\TTS\utils\io.py", line 86, in load_fsspec
    return torch.load(f, map_location=map_location, **kwargs)
  File "C:\Program Files\Python311\Lib\site-packages\torch\serialization.py", line 809, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "C:\Program Files\Python311\Lib\site-packages\torch\serialization.py", line 1172, in _load
    result = unpickler.load()
  File "C:\Program Files\Python311\Lib\site-packages\torch\serialization.py", line 1142, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "C:\Program Files\Python311\Lib\site-packages\torch\serialization.py", line 1116, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "C:\Program Files\Python311\Lib\site-packages\torch\serialization.py", line 217, in default_restore_location
    result = fn(storage, location)
  File "C:\Program Files\Python311\Lib\site-packages\torch\serialization.py", line 182, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "C:\Program Files\Python311\Lib\site-packages\torch\serialization.py", line 166, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

Process finished with exit code 1
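
The RuntimeError at the bottom of the traceback states PyTorch's own remedy: checkpoints whose tensors were saved on a CUDA device must be loaded with `map_location` so the storages are remapped to the CPU. A minimal sketch of that mechanism in plain PyTorch (the checkpoint file name here is just a placeholder):

```python
import torch

# Save a small checkpoint, then load it back while forcing every tensor
# storage onto the CPU. map_location=torch.device("cpu") is exactly the
# remedy the RuntimeError above recommends; it also remaps tensors that
# were originally saved from a CUDA device.
torch.save({"weights": torch.ones(2, 2)}, "checkpoint.pt")
state = torch.load("checkpoint.pt", map_location=torch.device("cpu"))
print(state["weights"].device)
```
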

erogol commented 1 year ago

I can't reproduce this. Tried this

import torch
from TTS.api import TTS
model_name = TTS().list_models()[0]
tts = TTS(model_name).to("cpu")
tts.tts_to_file(text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
                  file_path="output.wav",
                  speaker_wav="SOME/VOICE.WAV",
                   language="en")
Leppan commented 1 year ago

my code:

import torch
from TTS.api import TTS
device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = TTS().list_models()[0]
tts = TTS(model_name).to(device)
wav = tts.tts("This is a test! This is also a test!!", speaker=tts.speakers[0], language=tts.languages[0])
tts.tts_to_file(text="text", speaker=tts.speakers[0], language=tts.languages[0], file_path="output.wav")

I got the error again when I tried your code: RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

erogol commented 1 year ago
tts.tts_to_file(text="text", speaker=tts.speakers[0], language=tts.languages[0], file_path="output.wav")

is not a valid call for the model you set.

Leppan commented 1 year ago
tts.tts_to_file(text="text", speaker=tts.speakers[0], language=tts.languages[0], file_path="output.wav")

is not a valid call for the model you set.

Okay, what should I use so that it doesn't throw a CPU-related error? Do I need to set up a config of some kind, or something else?

Lenos500 commented 1 year ago

I can't reproduce this. Tried this

import torch
from TTS.api import TTS
model_name = TTS().list_models()[0]
tts = TTS(model_name).to("cpu")
tts.tts_to_file(text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
                  file_path="output.wav",
                  speaker_wav="SOME/VOICE.WAV",
                   language="en")

Will this work locally on CPU without errors? Have you tried it?

Leppan commented 1 year ago

I can't reproduce this. Tried this

import torch
from TTS.api import TTS
model_name = TTS().list_models()[0]
tts = TTS(model_name).to("cpu")
tts.tts_to_file(text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
                  file_path="output.wav",
                  speaker_wav="SOME/VOICE.WAV",
                   language="en")

Will this work locally on CPU without errors? Have you tried it?

It doesn't work. I tried it.

Lenos500 commented 1 year ago

I can't reproduce this. Tried this

import torch
from TTS.api import TTS
model_name = TTS().list_models()[0]
tts = TTS(model_name).to("cpu")
tts.tts_to_file(text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
                  file_path="output.wav",
                  speaker_wav="SOME/VOICE.WAV",
                   language="en")

Will this work locally on CPU without errors? Have you tried it?

It doesn't work. tried it

What do you think we should be doing now regarding this issue?

Leppan commented 1 year ago

I can't reproduce this. Tried this

import torch
from TTS.api import TTS
model_name = TTS().list_models()[0]
tts = TTS(model_name).to("cpu")
tts.tts_to_file(text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
                  file_path="output.wav",
                  speaker_wav="SOME/VOICE.WAV",
                   language="en")

Will this work locally on CPU without errors? Have you tried it?

It doesn't work. tried it

What do you think we should be doing now regarding this issue?

How would I know? I came here with a problem. None of you have offered even the slightest bit of a reasonable solution. I say that the error remains, I ask whether there is a config file somewhere, and you just ignore it. The question here is what I should do, not the other way around.

Lenos500 commented 1 year ago

I can't reproduce this. Tried this

import torch
from TTS.api import TTS
model_name = TTS().list_models()[0]
tts = TTS(model_name).to("cpu")
tts.tts_to_file(text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
                  file_path="output.wav",
                  speaker_wav="SOME/VOICE.WAV",
                   language="en")

Will this work locally on CPU without errors? Have you tried it?

It doesn't work. tried it

What do you think we should be doing now regarding this issue?

How would I know? I came here with a problem. none of you have offered even the slightest bit of a reasonable solution. I say that the error remains. I ask if there is a config for configuration somewhere. you're just ignoring it. Here the question is what should I do, not you.

Well, I have the same problem and came here for help; I have no idea either.