erew123 / alltalk_tts

AllTalk is based on the Coqui TTS engine, similar to the Coqui_tts extension for Text generation webUI; however, it supports a variety of advanced features, such as a settings page, low VRAM support, DeepSpeed, a narrator, model finetuning, custom models, and wav file maintenance. It can also be used with 3rd party software via JSON calls.

Initial startup failed #110

Closed nathanhere closed 6 months ago

nathanhere commented 6 months ago

Standalone install on Windows 11, with the finetuning requirements installed.

Logs:

ERROR:    Traceback (most recent call last):
  File "C:\alltalk_tts\alltalk_environment\env\Lib\site-packages\starlette\routing.py", line 677, in lifespan
    async with self.lifespan_context(app) as maybe_state:
  File "C:\alltalk_tts\alltalk_environment\env\Lib\contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\alltalk_tts\tts_server.py", line 131, in startup_shutdown
    await setup()
  File "C:\alltalk_tts\tts_server.py", line 176, in setup
    model = await xtts_manual_load_model()
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\alltalk_tts\tts_server.py", line 247, in xtts_manual_load_model
    model.load_checkpoint(
  File "C:\alltalk_tts\alltalk_environment\env\Lib\site-packages\TTS\tts\models\xtts.py", line 760, in load_checkpoint
    checkpoint = self.get_compatible_checkpoint_state_dict(model_path)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\alltalk_tts\alltalk_environment\env\Lib\site-packages\TTS\tts\models\xtts.py", line 710, in get_compatible_checkpoint_state_dict
    checkpoint = load_fsspec(model_path, map_location=torch.device("cpu"))["model"]
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\alltalk_tts\alltalk_environment\env\Lib\site-packages\TTS\utils\io.py", line 54, in load_fsspec
    return torch.load(f, map_location=map_location, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\alltalk_tts\alltalk_environment\env\Lib\site-packages\torch\serialization.py", line 993, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\alltalk_tts\alltalk_environment\env\Lib\site-packages\torch\serialization.py", line 447, in __init__
    super().__init__(torch._C.PyTorchFileReader(name_or_buffer))
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

ERROR:    Application startup failed. Exiting.
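
Note: the RuntimeError at the bottom of the trace ("failed finding central directory") usually means the XTTS checkpoint file was truncated or corrupted during download, since PyTorch .pth checkpoints are zip archives. As a rough diagnostic sketch (the model path below is an assumption; adjust it to wherever your install actually keeps model.pth), you can check whether the file is a readable zip before re-downloading it:

    # Minimal sketch: a valid torch checkpoint should be a readable zip archive.
    # MODEL_PATH is a hypothetical location; point it at your actual XTTS model.pth.
    import zipfile
    from pathlib import Path

    MODEL_PATH = Path(r"C:\alltalk_tts\models\xttsv2_2.0.2\model.pth")  # assumed path

    if not MODEL_PATH.exists():
        print(f"Missing: {MODEL_PATH}")
    elif not zipfile.is_zipfile(MODEL_PATH):
        # This is the condition that produces "failed finding central directory":
        # the file is incomplete or corrupt, so delete it and let AllTalk re-download it.
        print(f"Corrupt or incomplete checkpoint: {MODEL_PATH} ({MODEL_PATH.stat().st_size} bytes)")
    else:
        print("Checkpoint looks like a valid zip archive.")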
nathanhere commented 6 months ago

Used "docker compose up" and it seemed to resolve this issue.