asadullahnaeem opened this issue 9 months ago
I am having the same issue running it locally on a single machine.
import torch
from TTS.api import TTS
# Get device
device = "cuda" if torch.cuda.is_available() else "cpu"
# List available 🐸TTS models
print(TTS().list_models())
# Init TTS
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)
# Run TTS
# ❗ Since this model is a multilingual voice cloning model, we must set the target speaker_wav and language
# Text to speech list of amplitude values as output
wav = tts.tts(text="Hello world!", speaker_wav="~/Downloads/uk_female_high_voice.wav", language="en")
# Text to speech to a file
tts.tts_to_file(text="Hello world!", speaker_wav="~/Downloads/uk_female_high_voice.wav", language="en", file_path="output.wav")
<TTS.utils.manage.ModelManager object at 0x7fbd95c5bee0>
> tts_models/multilingual/multi-dataset/xtts_v2 has been updated, clearing model cache...
> Downloading model to /home/chrish/.local/share/tts/tts_models--multilingual--multi-dataset--xtts_v2
100%|███████████████████████████████████████████████████████████████| 3.05k/3.05k [00:00<00:00, 8.33kiB/s]
100%|███████████████████████████████████████████████████████████████| 3.05k/3.05k [00:00<00:00, 8.47kiB/s]
100%|███████████████████████████████████████████████████████████████| 3.05k/3.05k [00:00<00:00, 8.19kiB/s]
100%|███████████████████████████████████████████████████████████████| 3.05k/3.05k [00:00<00:00, 7.74kiB/s]
> Model's license - CPML
> Check https://coqui.ai/cpml.txt for more info.
Traceback (most recent call last):
File "/home/chrish/workspace/talkie/example.py", line 11, in <module>
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)
File "/home/chrish/miniconda3/envs/talkie/lib/python3.10/site-packages/TTS/api.py", line 74, in __init__
self.load_tts_model_by_name(model_name, gpu)
File "/home/chrish/miniconda3/envs/talkie/lib/python3.10/site-packages/TTS/api.py", line 177, in load_tts_model_by_name
self.synthesizer = Synthesizer(
File "/home/chrish/miniconda3/envs/talkie/lib/python3.10/site-packages/TTS/utils/synthesizer.py", line 109, in __init__
self._load_tts_from_dir(model_dir, use_cuda)
File "/home/chrish/miniconda3/envs/talkie/lib/python3.10/site-packages/TTS/utils/synthesizer.py", line 161, in _load_tts_from_dir
config = load_config(os.path.join(model_dir, "config.json"))
File "/home/chrish/miniconda3/envs/talkie/lib/python3.10/site-packages/TTS/config/__init__.py", line 92, in load_config
data = read_json_with_comments(config_path)
File "/home/chrish/miniconda3/envs/talkie/lib/python3.10/site-packages/TTS/config/__init__.py", line 21, in read_json_with_comments
return json.loads(input_str)
File "/home/chrish/miniconda3/envs/talkie/lib/python3.10/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/home/chrish/miniconda3/envs/talkie/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/home/chrish/miniconda3/envs/talkie/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Wait, I just realized that Hugging Face is down, so the model failed to download. In the folder ~/.local/share/tts/tts_models--multilingual--multi-dataset--xtts_v2, the file config.json contains the HTML page that was received because Hugging Face returned a 503:

Hugging Face is in maintenance
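In case it helps anyone hitting the same JSONDecodeError, here is a quick sanity check (just a sketch; the path is the default cache location from the log above, adjust it for your setup) that tells you whether the cached config.json is real JSON or a saved error page:

import json
from pathlib import Path

# Default Coqui TTS cache location on Linux; adjust if your data dir differs.
config_path = Path.home() / ".local/share/tts/tts_models--multilingual--multi-dataset--xtts_v2/config.json"

try:
    with open(config_path) as f:
        json.load(f)
    print("config.json is valid JSON")
except json.JSONDecodeError:
    # A failed download (e.g. an HTML 503 page) ends up here.
    print("config.json is not JSON; first bytes:", config_path.read_text()[:80])
    print("Delete this folder and re-run so the model is downloaded again:", config_path.parent)

If the second branch triggers, removing the whole model folder and re-running the script once Hugging Face is back up fixes it.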
@asadullahnaeem - seeing as we got a similar error, is it possible that when you run multiple workers they encounter an issue because they are being disconnected from the network? Maybe you're being blocked from creating multiple connections. Meanwhile, when you run a single worker, it downloads the model fine and moves on.
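If that's what's happening, one workaround might be to warm the model cache once before gunicorn forks its workers, so no worker ever has to download anything itself. A rough sketch (the script name and gunicorn invocation below are only examples, not something from this thread):

# warmup.py - run once before starting the server,
# e.g.: python warmup.py && gunicorn app:app -w 4
import torch
from TTS.api import TTS

device = "cuda" if torch.cuda.is_available() else "cpu"

# Instantiating TTS downloads the model files into the local cache
# (~/.local/share/tts/) if they are missing, so afterwards the workers
# only ever read an already-complete cache.
TTS("tts_models/multilingual/multi-dataset/your_tts").to(device)
print("model cache is ready")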
Files are already downloaded.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. You might also look at our discussion channels.
Describe the bug

I am using tts_models/multilingual/multi-dataset/your_tts with multiple gunicorn workers, but most of the time I get the following error traceback. The code sometimes runs without any issue. It seems like a bug when multiple workers are used, because with a single worker it works fine.

Fix Tried

I thought that this issue occurs because of some thread/file lock when multiple workers try to access the same config.json file, so I added a delay in loading the model (code given), but it didn't work.
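The delay code I tried is not included here. Purely as an illustration, here is a sketch of a lock-based alternative (using the third-party filelock package; the lock path is just a placeholder) that would serialize model loading so only one worker downloads or loads the model at a time:

# Hypothetical alternative to a plain delay: a cross-process lock
# around model loading (pip install filelock).
import torch
from filelock import FileLock
from TTS.api import TTS

device = "cuda" if torch.cuda.is_available() else "cpu"

# All gunicorn workers share this lock file, so the first worker downloads
# the model while the others wait, then load from the completed cache.
with FileLock("/tmp/coqui_tts_model.lock"):
    tts = TTS("tts_models/multilingual/multi-dataset/your_tts").to(device)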
To Reproduce

Expected behavior

No response

Logs

Environment

Additional context

No response