6drf21e / ChatTTS_colab

🚀 One-click deployment (offline bundle included)! Based on ChatTTS, with streaming output, random voice sampling ("voice card drawing"), long-audio generation, and multi-speaker narration. Simple to use, no complicated installation required.

Streaming inference throws an error, how should I handle it? #49

Open GAOTAO04 opened 1 week ago

GAOTAO04 commented 1 week ago

```
To create a public link, set `share=True` in `launch()`.
result ['四川美食确实以辣闻名,但也有不辣的选择。比如甜水面、赖汤圆、蛋烘糕、叶儿粑等,这些小吃口味温和,甜而不腻,也很受欢迎。']
INFO:root:found existing fst: D:\AI\ChatTTS_colab_offline\runtime\lib\site-packages\tn\zh_tn_tagger.fst
INFO:root:                    D:\AI\ChatTTS_colab_offline\runtime\lib\site-packages\tn\zh_tn_verbalizer.fst
INFO:root:skip building fst for zh_normalizer ...
Building prefix dict from the default dictionary ...
DEBUG:jieba:Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\Administrator\AppData\Local\Temp\jieba.cache
DEBUG:jieba:Loading model from cache C:\Users\Administrator\AppData\Local\Temp\jieba.cache
Loading model cost 0.712 seconds.
DEBUG:jieba:Loading model cost 0.712 seconds.
Prefix dict has been built successfully.
DEBUG:jieba:Prefix dict has been built successfully.
speaker_type: seed
Traceback (most recent call last):
  File "D:\AI\ChatTTS_colab_offline\runtime\lib\site-packages\gradio\queueing.py", line 521, in process_events
    response = await route_utils.call_process_api(
  File "D:\AI\ChatTTS_colab_offline\runtime\lib\site-packages\gradio\route_utils.py", line 276, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\AI\ChatTTS_colab_offline\runtime\lib\site-packages\gradio\blocks.py", line 1945, in process_api
    result = await self.call_function(
  File "D:\AI\ChatTTS_colab_offline\runtime\lib\site-packages\gradio\blocks.py", line 1513, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\AI\ChatTTS_colab_offline\runtime\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\AI\ChatTTS_colab_offline\runtime\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "D:\AI\ChatTTS_colab_offline\runtime\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "D:\AI\ChatTTS_colab_offline\runtime\lib\site-packages\gradio\utils.py", line 831, in wrapper
    response = f(*args, **kwargs)
  File "D:\AI\ChatTTS_colab_offline\webui_mix.py", line 299, in generate_tts_audio
    raise e
  File "D:\AI\ChatTTS_colab_offline\webui_mix.py", line 280, in generate_tts_audio
    output_files = generate_audio_for_seed(
  File "D:\AI\ChatTTS_colab_offline\tts_model.py", line 110, in generate_audio_for_seed
    _params_infer_code = deepcopy(params_infer_code)
  File "D:\AI\ChatTTS_colab_offline\runtime\lib\copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "D:\AI\ChatTTS_colab_offline\runtime\lib\copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "D:\AI\ChatTTS_colab_offline\runtime\lib\copy.py", line 153, in deepcopy
    y = copier(memo)
  File "D:\AI\ChatTTS_colab_offline\runtime\lib\site-packages\torch\_tensor.py", line 86, in __deepcopy__
    raise RuntimeError(
RuntimeError: Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol at the moment. If you were attempting to deepcopy a module, this may be because of a torch.nn.utils.weight_norm usage, see https://github.com/pytorch/pytorch/pull/103001
```
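For context: the traceback ends in PyTorch's `Tensor.__deepcopy__`, which refuses to deep-copy any tensor that is not a graph leaf. The call `deepcopy(params_infer_code)` in `tts_model.py` therefore fails if the params dict holds a non-leaf tensor (for example a speaker embedding computed inside the model). The snippet below is a minimal sketch of that failure mode, not the project's actual code; the key name `spk_emb` and the tensor shape are illustrative assumptions.

```python
# Minimal sketch: deepcopy fails on a dict containing a non-leaf tensor.
import torch
from copy import deepcopy

base = torch.randn(1, 768, requires_grad=True)
spk_emb = base * 2  # non-leaf: produced by an op on a tensor that requires grad

# Hypothetical stand-in for the params dict named in the traceback.
params_infer_code = {"spk_emb": spk_emb, "temperature": 0.3}

try:
    deepcopy(params_infer_code)  # raises the same RuntimeError as above
except RuntimeError as err:
    print(err)
```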

6drf21e commented 1 week ago

This has been fixed. Download the 0.0.6 incremental upgrade package fix.zip (versions before 0.0.4 are not supported) and overwrite your existing files.
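If applying the upgrade package is not an option, one possible local workaround is to make the copy in `tts_model.py` tensor-aware. This is only a sketch, not the project's actual fix; it assumes the `deepcopy(params_infer_code)` call in `generate_audio_for_seed` exists solely to avoid mutating the caller's dict.

```python
import copy
import torch

def copy_infer_params(params: dict) -> dict:
    """Tensor-aware replacement for deepcopy(params_infer_code).

    Non-leaf tensors (e.g. a speaker embedding) cannot be deep-copied,
    so clone them explicitly and deep-copy everything else.
    """
    out = {}
    for key, value in params.items():
        if isinstance(value, torch.Tensor):
            out[key] = value.detach().clone()
        else:
            out[key] = copy.deepcopy(value)
    return out

# In tts_model.py, generate_audio_for_seed would then use:
# _params_infer_code = copy_infer_params(params_infer_code)
```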