RVC-Boss / GPT-SoVITS

1 min voice data can also be used to train a good TTS model! (few shot voice cloning)
MIT License

Streaming inference fails in api2.py on the fast_inference_ branch #1153

Open zhangshuhui241 opened 4 months ago

zhangshuhui241 commented 4 months ago

I am running the TTS service with api2.py on the latest fast_inference_ branch. When I set streaming_mode=false in the GET request, the service works normally. When I set streaming_mode=true, the request fails with an error that mentions asynchronous operations and CUDA. The detailed error output is below:

(screenshot of the error attached)

```
INFO:     14.145.223.238:0 - "GET /tts?text=%E5%85%88%E5%B8%9D%E5%88%9B%E4%B8%9A%E6%9C%AA%E5%8D%8A%E8%80%8C%E4%B8%AD%E9%81%93%E5%B4%A9%E6%AE%82%EF%BC%8C%E4%BB%8A%E5%A4%A9%E4%B8%8B%E4%B8%89%E5%88%86%EF%BC%8C%E7%9B%8A%E5%B7%9E%E7%96%B2%E5%BC%8A.&text_language=zh&streaming_mode=true&text_lang=zh&ref_audio_path=refvoices/bazong/whattimeisit.wav&prompt_text=%E6%B2%A1%E6%83%B3%E5%88%B0%E4%BD%A0%E5%85%88%E7%BB%99%E4%BA%86%E6%88%91%E4%B8%80%E4%B8%AA%E6%84%8F%E5%A4%96%EF%BC%8C%E7%9F%A5%E9%81%93%E7%8E%B0%E5%9C%A8%E5%87%A0%E7%82%B9%E4%BA%86%E4%B9%88 HTTP/1.1" 200 OK
Set seed to 965880117
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/root/autodl-tmp/envs/sovits/lib/python3.9/site-packages/starlette/responses.py", line 265, in __call__
    await wrap(partial(self.listen_for_disconnect, receive))
  File "/root/autodl-tmp/envs/sovits/lib/python3.9/site-packages/starlette/responses.py", line 261, in wrap
    await func()
  File "/root/autodl-tmp/envs/sovits/lib/python3.9/site-packages/starlette/responses.py", line 238, in listen_for_disconnect
    message = await receive()
  File "/root/autodl-tmp/envs/sovits/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 568, in receive
    await self.message_event.wait()
  File "/root/autodl-tmp/envs/sovits/lib/python3.9/asyncio/locks.py", line 226, in wait
    await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 7f0293050310

During handling of the above exception, another exception occurred:
```
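The visible CancelledError is just Starlette cancelling its listen_for_disconnect task; the actual CUDA failure is in the part of the traceback that is cut off after "During handling of the above exception". CUDA-plus-async errors like this often appear when a synchronous CUDA inference generator feeds a StreamingResponse directly on the event loop. One common mitigation, sketched below under that assumption (none of these names come from the project's code), is to drive the blocking generator on a worker thread and hand chunks to the event loop through a queue:

```python
import asyncio
import queue
import threading

def blocking_tts_chunks(text):
    # Stand-in for a blocking CUDA inference generator (hypothetical).
    for word in text.split():
        yield word.encode()

async def stream_blocking_generator(gen):
    """Run a blocking generator on a worker thread and yield its
    items to the asyncio event loop without blocking the loop."""
    q = queue.Queue(maxsize=4)
    sentinel = object()

    def worker():
        try:
            for item in gen:
                q.put(item)
        finally:
            q.put(sentinel)  # always signal completion, even on error

    threading.Thread(target=worker, daemon=True).start()
    loop = asyncio.get_running_loop()
    while True:
        # q.get() blocks, so run it in the default thread pool.
        item = await loop.run_in_executor(None, q.get)
        if item is sentinel:
            break
        yield item

async def main():
    gen = blocking_tts_chunks("hello streaming world")
    return [chunk async for chunk in stream_blocking_generator(gen)]
```

An async generator like this can be passed to FastAPI's StreamingResponse, so the CUDA work never blocks uvicorn's event loop while the disconnect watcher runs.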

zhangshuhui241 commented 4 months ago

Additional information: I tested the GET requests from a browser. On the main branch I tried streaming with api.py; it runs correctly, but it is too slow in practice, subjectively slower than non-streaming inference on the fast_inference_ branch. I then tried starting the streaming service with api.py on the fast_inference_ branch; it starts normally, but the GET request fails with the following error:

```
INFO:     Number of parameter: 77.49M
INFO:     Started server process [102749]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:6006 (Press CTRL+C to quit)
INFO:     14.145.223.238:0 - "GET /tts?text=%E5%85%88%E5%B8%9D%E5%88%9B%E4%B8%9A%E6%9C%AA%E5%8D%8A%E8%80%8C%E4%B8%AD%E9%81%93%E5%B4%A9%E6%AE%82%EF%BC%8C%E4%BB%8A%E5%A4%A9%E4%B8%8B%E4%B8%89%E5%88%86%EF%BC%8C%E7%9B%8A%E5%B7%9E%E7%96%B2%E5%BC%8A.&text_language=zh&streaming_mode=false&text_lang=zh&ref_audio_path=refvoices/bazong/whattimeisit.wav&prompt_text=%E6%B2%A1%E6%83%B3%E5%88%B0%E4%BD%A0%E5%85%88%E7%BB%99%E4%BA%86%E6%88%91%E4%B8%80%E4%B8%AA%E6%84%8F%E5%A4%96%EF%BC%8C%E7%9F%A5%E9%81%93%E7%8E%B0%E5%9C%A8%E5%87%A0%E7%82%B9%E4%BA%86%E4%B9%88 HTTP/1.1" 404 Not Found
INFO:     14.145.223.238:0 - "GET /?text=%E5%85%88%E5%B8%9D%E5%88%9B%E4%B8%9A%E6%9C%AA%E5%8D%8A%E8%80%8C%E4%B8%AD%E9%81%93%E5%B4%A9%E6%AE%82%EF%BC%8C%E4%BB%8A%E5%A4%A9%E4%B8%8B%E4%B8%89%E5%88%86%EF%BC%8C%E7%9B%8A%E5%B7%9E%E7%96%B2%E5%BC%8A.&text_language=zh&streaming_mode=false&text_lang=zh&ref_audio_path=refvoices/bazong/whattimeisit.wav&prompt_text=%E6%B2%A1%E6%83%B3%E5%88%B0%E4%BD%A0%E5%85%88%E7%BB%99%E4%BA%86%E6%88%91%E4%B8%80%E4%B8%AA%E6%84%8F%E5%A4%96%EF%BC%8C%E7%9F%A5%E9%81%93%E7%8E%B0%E5%9C%A8%E5%87%A0%E7%82%B9%E4%BA%86%E4%B9%88 HTTP/1.1" 200 OK
INFO:     14.145.223.238:0 - "GET /?text=%E5%85%88%E5%B8%9D%E5%88%9B%E4%B8%9A%E6%9C%AA%E5%8D%8A%E8%80%8C%E4%B8%AD%E9%81%93%E5%B4%A9%E6%AE%82%EF%BC%8C%E4%BB%8A%E5%A4%A9%E4%B8%8B%E4%B8%89%E5%88%86%EF%BC%8C%E7%9B%8A%E5%B7%9E%E7%96%B2%E5%BC%8A.&text_language=zh&streaming_mode=false&text_lang=zh&ref_audio_path=refvoices/bazong/whattimeisit.wav&prompt_text=%E6%B2%A1%E6%83%B3%E5%88%B0%E4%BD%A0%E5%85%88%E7%BB%99%E4%BA%86%E6%88%91%E4%B8%80%E4%B8%AA%E6%84%8F%E5%A4%96%EF%BC%8C%E7%9F%A5%E9%81%93%E7%8E%B0%E5%9C%A8%E5%87%A0%E7%82%B9%E4%BA%86%E4%B9%88 HTTP/1.1" 200 OK
Building prefix dict from the default dictionary ...
DEBUG:jieba_fast:Building prefix dict from the default dictionary ...
Loading model from cache /tmp/jieba.cache
DEBUG:jieba_fast:Loading model from cache /tmp/jieba.cache
Loading model cost 0.702 seconds.
DEBUG:jieba_fast:Loading model cost 0.702 seconds.
Prefix dict has been built succesfully.
DEBUG:jieba_fast:Prefix dict has been built succesfully.
ERROR:    Exception in ASGI application

During handling of the above exception, another exception occurred:
```
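Note that in this log the identical query succeeds on `/` but returns 404 on `/tts`, so the route path differs between api.py and api2.py. When testing from code instead of a browser, the standard library can build the query string so the Chinese text is percent-encoded correctly. A sketch, with the endpoint path and parameter names taken from the logged requests and the values illustrative:

```python
from urllib.parse import urlencode

# Parameters as they appear in the logged GET request above.
params = {
    "text": "先帝创业未半而中道崩殂,今天下三分,益州疲弊.",
    "text_lang": "zh",
    "streaming_mode": "false",
    "ref_audio_path": "refvoices/bazong/whattimeisit.wav",
    "prompt_text": "没想到你先给了我一个意外,知道现在几点了么",
}

# api.py on this branch serves "/" (the "/tts" path returned 404 above).
url = "http://127.0.0.1:6006/?" + urlencode(params)
```

urlencode percent-encodes the UTF-8 bytes of the Chinese text, producing the same `%E5%85%88...` escapes seen in the server log.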

lan99mu commented 4 months ago

Any solution to this yet?