Closed Leppan closed 1 year ago
I can't reproduce this. Tried this
import torch
from TTS.api import TTS
model_name = TTS().list_models()[0]
tts = TTS(model_name).to("cpu")
tts.tts_to_file(text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
file_path="output.wav",
speaker_wav="SOME/VOICE.WAV",
language="en")
my code:
import torch
from TTS.api import TTS

device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = TTS().list_models()[0]
tts = TTS(model_name).to(device)
wav = tts.tts("This is a test! This is also a test!!", speaker=tts.speakers[0], language=tts.languages[0])
tts.tts_to_file(text="text", speaker=tts.speakers[0], language=tts.languages[0], file_path="output.wav")
I got the error again when I tried using your code. RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
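For reference, the workaround the error message itself suggests looks like this. This is a minimal, self-contained sketch using an in-memory checkpoint rather than a real model file: map_location remaps any CUDA-saved storages onto the CPU at load time, so the load succeeds even when torch.cuda.is_available() is False.

```python
import io
import torch

# Stand-in for a checkpoint file on disk (assumed setup for the sketch).
buffer = io.BytesIO()
torch.save(torch.zeros(3), buffer)
buffer.seek(0)

# map_location forces all storages onto the CPU, which is what the
# RuntimeError above is asking for on a CPU-only machine.
tensor = torch.load(buffer, map_location=torch.device("cpu"))
print(tensor.device)  # cpu
```

Note that the TTS API loads the checkpoint internally, so this only helps if the library exposes a way to pass map_location (or picks the device correctly itself); it illustrates the mechanism, not a ready-made TTS fix.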
tts.tts_to_file(text="text", speaker=tts.speakers[0], language=tts.languages[0], file_path="output.wav")
is not a valid call for the model you set.
Okay, what should I use so that it doesn't throw a CPU-related error? How do I set this up: through a config of some kind, or something else?
I can't reproduce this. Tried this
import torch
from TTS.api import TTS

model_name = TTS().list_models()[0]
tts = TTS(model_name).to("cpu")
tts.tts_to_file(text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
                file_path="output.wav",
                speaker_wav="SOME/VOICE.WAV",
                language="en")
Will this work locally on CPU without errors? Have you tried it?
It doesn't work. I tried it.
What do you think we should be doing now regarding this issue?
How would I know? I came here with a problem, and none of you have offered even the slightest hint of a reasonable solution. I keep saying that the error remains. I asked whether there is a config somewhere for this, and you just ignore it. The question here is what I should do, not what you should do.
Well, I have the same problem and came here for help; I have no idea either.
Describe the bug
Hello! I have an RX 580 graphics card in my computer. However, Torch/TTS refuses to work with it for a reason I don't understand: it says only the processor is being used. Please help me fix this so that either the video card or the processor works properly.
To Reproduce
import torch
from TTS.api import TTS

device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = TTS().list_models()[0]
tts = TTS(model_name).to(device)
wav = tts.tts("This is a test! This is also a test!!", speaker=tts.speakers[0], language=tts.languages[0])
tts.tts_to_file(text="text", speaker=tts.speakers[0], language=tts.languages[0], file_path="output.wav")
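For context, a minimal sketch of the device check used above: the RX 580 is an AMD card, and a standard PyTorch build only exposes CUDA for NVIDIA GPUs (AMD support requires a separate ROCm build), so torch.cuda.is_available() returns False here and everything falls back to the CPU.

```python
import torch

# A standard (CUDA) PyTorch wheel cannot drive an AMD RX 580;
# is_available() then returns False and the code falls back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensors (and models, via .to(device)) placed this way never touch
# CUDA on a CPU-only setup, so no CUDA-related error is raised.
x = torch.ones(2, device=device)
print(x.device.type)
```

On this machine the script would print cpu. Getting the RX 580 itself to do the work would require a ROCm build of PyTorch (a separate installation, not a TTS config option), and ROCm may or may not support this particular card.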
Expected behavior
No response
Logs
No response
Environment
Additional context
No API token found for 🐸Coqui Studio voices - https://coqui.ai Visit 🔗https://app.coqui.ai/account to get one. Set it as an environment variable
export COQUI_STUDIO_TOKEN=<token>
Process finished with exit code 1