fishaudio / fish-speech

Brand new TTS solution
https://speech.fish.audio

[BUG] Error when opening the inference page #459

Closed. Jason-JP-Yang closed this issue 1 month ago.

Jason-JP-Yang commented 2 months ago

I opened the inference page with the Windows WebUI (screenshot), but hit an error when generating audio (screenshot):

```
2024-08-11 15:49:04.772 | INFO     | tools.llama.generate:generate_long:508 - Bandwidth achieved: 16.93 GB/s
2024-08-11 15:49:04.773 | INFO     | tools.llama.generate:generate_long:513 - GPU Memory used: 1.42 GB
2024-08-11 15:49:04.778 | INFO     | tools.api:decode_vq_tokens:128 - VQ features: torch.Size([4, 271])
2024-08-11 15:49:05.325 | INFO     | __main__:<module>:555 - Warming up done, launching the web UI...
Running on local URL:  http://127.0.0.1:7862

To create a public link, set `share=True` in `launch()`.
You are using the latest version of funasr-1.1.4
Download: iic/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch failed!: No module named 'modelscope'
Traceback (most recent call last):
  File "D:\fish-speech-1.2\fishenv\env\lib\site-packages\gradio\queueing.py", line 536, in process_events
    response = await route_utils.call_process_api(
  File "D:\fish-speech-1.2\fishenv\env\lib\site-packages\gradio\route_utils.py", line 288, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\fish-speech-1.2\fishenv\env\lib\site-packages\gradio\blocks.py", line 1931, in process_api
    result = await self.call_function(
  File "D:\fish-speech-1.2\fishenv\env\lib\site-packages\gradio\blocks.py", line 1516, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "D:\fish-speech-1.2\fishenv\env\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "D:\fish-speech-1.2\fishenv\env\lib\site-packages\anyio\_backends\_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
  File "D:\fish-speech-1.2\fishenv\env\lib\site-packages\anyio\_backends\_asyncio.py", line 859, in run
    result = context.run(func, *args)
  File "D:\fish-speech-1.2\fishenv\env\lib\site-packages\gradio\utils.py", line 826, in wrapper
    response = f(*args, **kwargs)
  File "D:\fish-speech-1.2\tools\webui.py", line 264, in inference_wrapper
    result = inference_with_auto_rerank(
  File "D:\fish-speech-1.2\tools\webui.py", line 193, in inference_with_auto_rerank
    zh_model, en_model = load_model()
  File "D:\fish-speech-1.2\tools\auto_rerank.py", line 15, in load_model
    zh_model = AutoModel(
  File "D:\fish-speech-1.2\fishenv\env\lib\site-packages\funasr\auto\auto_model.py", line 124, in __init__
    model, kwargs = self.build_model(**kwargs)
  File "D:\fish-speech-1.2\fishenv\env\lib\site-packages\funasr\auto\auto_model.py", line 218, in build_model
    assert model_class is not None, f'{kwargs["model"]} is not registered'
```
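The failure chain is visible in the log: funasr tries to download the SeACo-Paraformer model through the ModelScope hub, but the `modelscope` package is missing from `fishenv`, so the download fails, no model class gets registered, and `build_model()` trips the assertion. A minimal sketch that reproduces this outside the WebUI (the exact arguments used in `tools/auto_rerank.py` may differ) could look like:

```python
# Hypothetical standalone reproduction; run with the interpreter inside fishenv.
# funasr resolves this model id via the ModelScope hub, which needs the
# `modelscope` package at download time.
from funasr import AutoModel

zh_model = AutoModel(
    model="iic/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch"
)
# Without modelscope installed, the download fails, kwargs["model"] never
# resolves to a registered model class, and the assertion above is raised.
```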
Jason-JP-Yang commented 2 months ago

Can this be fixed just by installing modelscope into the conda environment myself???? Let me try.
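One quick way to test that hypothesis is to run `pip install modelscope` with the same interpreter that launches `webui.py`, then confirm both packages funasr needs are importable. This is a hypothetical check script, not part of fish-speech:

```python
# Hypothetical check: confirm the packages funasr needs at model-download time
# are importable in the environment that runs the WebUI.
import importlib.util

for name in ("funasr", "modelscope"):
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'found' if found else 'MISSING'}")
```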

AnyaCoder commented 2 months ago

Don't use the release; please clone or download a zip of the latest main branch code.
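For reference, a sketch of that step (assuming Git is installed; set up the environment afterwards per the docs at https://speech.fish.audio):

```bash
# Fetch the latest main branch instead of the packaged release
git clone https://github.com/fishaudio/fish-speech.git
cd fish-speech
```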