coqui-ai / TTS

🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
http://coqui.ai
Mozilla Public License 2.0

Unable to download TTS models in docker environment #3546

Closed avrellaku closed 8 months ago

avrellaku commented 9 months ago

Describe the bug

Basically, in my local environment (Windows) I can download and use all models. However, when I build my project in Docker, downloading the models fails with this error:

  File "<string>", line 1, in <module>
  File "/usr/local/lib/python3.11/site-packages/TTS/api.py", line 129, in download_model_by_name
    model_path, config_path, model_item = self.manager.download_model(model_name)
                                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/TTS/utils/manage.py", line 411, in download_model
    self.create_dir_and_download_model(model_name, model_item, output_path)
  File "/usr/local/lib/python3.11/site-packages/TTS/utils/manage.py", line 352, in create_dir_and_download_model
    raise e
  File "/usr/local/lib/python3.11/site-packages/TTS/utils/manage.py", line 347, in create_dir_and_download_model
    self._download_hf_model(model_item, output_path)
  File "/usr/local/lib/python3.11/site-packages/TTS/utils/manage.py", line 237, in _download_hf_model
    self._download_model_files(model_item["hf_url"], output_path, self.progress_bar)
  File "/usr/local/lib/python3.11/site-packages/TTS/utils/manage.py", line 609, in _download_model_files
    for data in r.iter_content(block_size):
  File "/usr/local/lib/python3.11/site-packages/requests/models.py", line 818, in generate
    raise ChunkedEncodingError(e)
requests.exceptions.ChunkedEncodingError: ('Connection broken: IncompleteRead(1345799194 bytes read, 522129924 more expected)', IncompleteRead(1345799194 bytes read, 522129924 more expected))

To Reproduce

Create a simple docker file:

FROM python:3.11 AS base
RUN pip install TTS
RUN yes | python -c "from TTS.api import TTS; TTS().download_model_by_name(model_name='tts_models/multilingual/multi-dataset/xtts_v2')"

Build and run it. You will notice the error when downloading the model.

Expected behavior

It should work as it does locally: the model should be downloaded.

Logs

No response

Environment

{
    "CUDA": {
        "GPU": [],
        "available": false,
        "version": "12.1"
    },
    "Packages": {
        "PyTorch_debug": false,
        "PyTorch_version": "2.1.2+cu121",
        "TTS": "0.22.0",
        "numpy": "1.26.3"
    },
    "System": {
        "OS": "Linux",
        "architecture": [
            "64bit",
            "ELF"
        ],
        "processor": "",
        "python": "3.11.7",
        "version": "#1 SMP Thu Oct 5 21:02:42 UTC 2023"
    }
}

Additional context

No response

avrellaku commented 9 months ago

FYI: the issue seems to be solved if stream=True is removed from https://github.com/coqui-ai/TTS/blob/dev/TTS/utils/manage.py#L600. But I don't know if it affects other parts...
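An alternative that avoids patching the library is to retry the whole download call on the transient error, since `ChunkedEncodingError` is raised by a broken connection rather than a bad model. This is a minimal sketch under that assumption; `retry` is a hypothetical helper, not part of the TTS API:

```python
import time
import requests

def retry(fn, *, attempts=3, delay=2.0,
          exceptions=(requests.exceptions.ChunkedEncodingError,
                      requests.exceptions.ConnectionError)):
    """Call fn(), retrying on transient download errors with linear backoff."""
    last_exc = None
    for i in range(attempts):
        try:
            return fn()
        except exceptions as e:
            last_exc = e
            time.sleep(delay * (i + 1))
    raise last_exc
```

With the Dockerfile above, the download line could then call something like `retry(lambda: TTS().download_model_by_name(model_name='tts_models/multilingual/multi-dataset/xtts_v2'))`; partial files may still need cleaning up between attempts, depending on how the manager handles an existing directory.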

stale[bot] commented 8 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. You might also look at our discussion channels.