jianchang512 / ChatTTS-ui

A simple local web interface that uses ChatTTS to synthesize text into speech, along with support for an external API.
https://pyvideotrans.com
5.19k stars · 570 forks

发生错误: INTERNAL SERVER ERROR (An error occurred: INTERNAL SERVER ERROR) #16

Open yhai3596 opened 1 month ago

yhai3596 commented 1 month ago

Installation and startup both work fine, but when synthesizing speech I get the error "发生错误: INTERNAL SERVER ERROR" (An error occurred: INTERNAL SERVER ERROR).

(screenshot attached: sp240531_154240)

CrackerSW commented 1 month ago

Brother, take a look at my issue "MacOS生成报错" (macOS generation error).

jianchang512 commented 1 month ago

Please share the specific error shown in the black console window.

jianchang512 commented 1 month ago

Run `brew install libomp`.

That's the fix from the second comment.

JiangLongLiu commented 1 month ago

On Windows 11 x64 there is no brew; how should I fix this?

jianchang512 commented 1 month ago

This error shouldn't happen on Windows 11. You could try the prepackaged release.

admin8756 commented 1 month ago

I have this problem too: "发生错误: INTERNAL SERVER ERROR".

CrackerSW commented 1 month ago

It's outside working hours right now; I'll reply as soon as I'm back at work!

yhai3596 commented 1 month ago

After installing libomp into c:\windows\system32, the same error is still reported:

    ERROR:app:Exception on /tts [POST]
    Traceback (most recent call last):
      File "D:\chattts\venv\lib\site-packages\flask\app.py", line 1473, in wsgi_app
        response = self.full_dispatch_request()
      File "D:\chattts\venv\lib\site-packages\flask\app.py", line 882, in full_dispatch_request
        rv = self.handle_user_exception(e)
      File "D:\chattts\venv\lib\site-packages\flask\app.py", line 880, in full_dispatch_request
        rv = self.dispatch_request()
      File "D:\chattts\venv\lib\site-packages\flask\app.py", line 865, in dispatch_request
        return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
      File "D:\chattts\app.py", line 118, in tts
        wavs = chat.infer([t for t in text.split("\n") if t.strip()], use_decoder=True, params_infer_code={'spk_emb': rand_spk}, params_refine_text={'prompt': prompt})
      File "D:\chattts\ChatTTS\core.py", line 154, in infer
        text_tokens = refine_text(self.pretrain_models, text, **params_refine_text)['ids']
      File "D:\chattts\ChatTTS\infer\api.py", line 114, in refine_text
        result = models['gpt'].generate(
      File "D:\chattts\ChatTTS\model\gpt.py", line 203, in generate
        outputs = self.gpt.forward(**model_input, output_attentions=return_attn)
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 451, in _fn
        return fn(*args, **kwargs)
      File "D:\chattts\venv\lib\site-packages\transformers\models\llama\modeling_llama.py", line 940, in forward
        causal_mask = self._update_causal_mask(
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 921, in catch_errors
        return callback(frame, cache_entry, hooks, frame_state, skip=1)
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 786, in _convert_frame
        result = inner_convert(
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 400, in _convert_frame_assert
        return _compile(
      File "D:\Python\Python39\lib\contextlib.py", line 79, in inner
        return func(*args, **kwds)
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 676, in _compile
        guarded_code = compile_inner(code, one_graph, hooks, transform)
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\utils.py", line 262, in time_wrapper
        r = func(*args, **kwargs)
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 535, in compile_inner
        out_code = transform_code_object(code, transform)
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\bytecode_transformation.py", line 1036, in transform_code_object
        transformations(instructions, code_options)
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 165, in _fn
        return fn(*args, **kwargs)
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 500, in transform
        tracer.run()
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2149, in run
        super().run()
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 810, in run
        and self.step()
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 773, in step
        getattr(self, inst.opname)(inst)
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2268, in RETURN_VALUE
        self.output.compile_subgraph(
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 971, in compile_subgraph
        self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
      File "D:\Python\Python39\lib\contextlib.py", line 79, in inner
        return func(*args, **kwds)
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 1168, in compile_and_call_fx_graph
        compiled_fn = self.call_user_compiler(gm)
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\utils.py", line 262, in time_wrapper
        r = func(*args, **kwargs)
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 1241, in call_user_compiler
        raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 1222, in call_user_compiler
        compiled_fn = compiler_fn(gm, self.example_inputs())
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\repro\after_dynamo.py", line 117, in debug_wrapper
        compiled_gm = compiler_fn(gm, example_inputs)
      File "D:\chattts\venv\lib\site-packages\torch\__init__.py", line 1729, in __call__
        return compile_fx(model_, inputs_, config_patches=self.config)
      File "D:\Python\Python39\lib\contextlib.py", line 79, in inner
        return func(*args, **kwds)
      File "D:\chattts\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 1330, in compile_fx
        return aot_autograd(
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\backends\common.py", line 58, in compiler_fn
        cg = aot_module_simplified(gm, example_inputs, **kwargs)
      File "D:\chattts\venv\lib\site-packages\torch\_functorch\aot_autograd.py", line 903, in aot_module_simplified
        compiled_fn = create_aot_dispatcher_function(
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\utils.py", line 262, in time_wrapper
        r = func(*args, **kwargs)
      File "D:\chattts\venv\lib\site-packages\torch\_functorch\aot_autograd.py", line 628, in create_aot_dispatcher_function
        compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata)
      File "D:\chattts\venv\lib\site-packages\torch\_functorch\_aot_autograd\runtime_wrappers.py", line 443, in aot_wrapper_dedupe
        return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata)
      File "D:\chattts\venv\lib\site-packages\torch\_functorch\_aot_autograd\runtime_wrappers.py", line 648, in aot_wrapper_synthetic_base
        return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata)
      File "D:\chattts\venv\lib\site-packages\torch\_functorch\_aot_autograd\jit_compile_runtime_wrappers.py", line 119, in aot_dispatch_base
        compiled_fw = compiler(fw_module, updated_flat_args)
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\utils.py", line 262, in time_wrapper
        r = func(*args, **kwargs)
      File "D:\chattts\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 1257, in fw_compiler_base
        return inner_compile(
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\repro\after_aot.py", line 83, in debug_wrapper
        inner_compiled_fn = compiler_fn(gm, example_inputs)
      File "D:\chattts\venv\lib\site-packages\torch\_inductor\debug.py", line 304, in inner
        return fn(*args, **kwargs)
      File "D:\Python\Python39\lib\contextlib.py", line 79, in inner
        return func(*args, **kwds)
      File "D:\Python\Python39\lib\contextlib.py", line 79, in inner
        return func(*args, **kwds)
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\utils.py", line 262, in time_wrapper
        r = func(*args, **kwargs)
      File "D:\chattts\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 438, in compile_fx_inner
        compiled_graph = fx_codegen_and_compile(
      File "D:\chattts\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 714, in fx_codegen_and_compile
        compiled_fn = graph.compile_to_fn()
      File "D:\chattts\venv\lib\site-packages\torch\_inductor\graph.py", line 1307, in compile_to_fn
        return self.compile_to_module().call
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\utils.py", line 262, in time_wrapper
        r = func(*args, **kwargs)
      File "D:\chattts\venv\lib\site-packages\torch\_inductor\graph.py", line 1250, in compile_to_module
        self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
      File "D:\chattts\venv\lib\site-packages\torch\_inductor\graph.py", line 1208, in codegen
        self.scheduler.codegen()
      File "D:\chattts\venv\lib\site-packages\torch\_dynamo\utils.py", line 262, in time_wrapper
        r = func(*args, **kwargs)
      File "D:\chattts\venv\lib\site-packages\torch\_inductor\scheduler.py", line 2339, in codegen
        self.get_backend(device).codegen_nodes(node.get_nodes())  # type: ignore[possibly-undefined]
      File "D:\chattts\venv\lib\site-packages\torch\_inductor\codegen\cpp.py", line 3623, in codegen_nodes
        kernel_group.finalize_kernel(cpp_kernel_proxy, nodes)
      File "D:\chattts\venv\lib\site-packages\torch\_inductor\codegen\cpp.py", line 3661, in finalize_kernel
        new_kernel.codegen_loops(code, ws)
      File "D:\chattts\venv\lib\site-packages\torch\_inductor\codegen\cpp.py", line 3458, in codegen_loops
        self.codegen_loops_impl(self.loop_nest, code, worksharing)
      File "D:\chattts\venv\lib\site-packages\torch\_inductor\codegen\cpp.py", line 1832, in codegen_loops_impl
        gen_loops(loop_nest.root)
      File "D:\chattts\venv\lib\site-packages\torch\_inductor\codegen\cpp.py", line 1804, in gen_loops
        gen_loop(loop, in_reduction)
      File "D:\chattts\venv\lib\site-packages\torch\_inductor\codegen\cpp.py", line 1817, in gen_loop
        loop_lines = loop.lines()
      File "D:\chattts\venv\lib\site-packages\torch\_inductor\codegen\cpp.py", line 3922, in lines
        elif not self.reduction_var_map and codecache.is_gcc():
      File "D:\chattts\venv\lib\site-packages\torch\_inductor\codecache.py", line 1001, in is_gcc
        return bool(re.search(r"(gcc|g++)", cpp_compiler()))
      File "D:\chattts\venv\lib\site-packages\torch\_inductor\codecache.py", line 944, in cpp_compiler
        return cpp_compiler_search(search)
      File "D:\chattts\venv\lib\site-packages\torch\_inductor\codecache.py", line 971, in cpp_compiler_search
        raise exc.InvalidCxxCompiler()
    torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
    InvalidCxxCompiler: No working C++ compiler found in torch._inductor.config.cpp.cxx: (None, 'g++')

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information

You can suppress this exception and fall back to eager by setting:

    import torch._dynamo
    torch._dynamo.config.suppress_errors = True
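The last frame is the root cause: TorchInductor cannot find a working C++ compiler (`g++`) on this Windows machine, so the `torch.compile` step fails. A minimal sketch of the fallback the error message itself suggests is below (it assumes a stock PyTorch 2.x install; eager mode is slower than compiled kernels but needs no C++ toolchain):

```python
# Workaround sketch (assumes PyTorch 2.x): when the inductor backend
# raises InvalidCxxCompiler because no C++ compiler is on PATH, tell
# TorchDynamo to swallow backend failures and run the original
# uncompiled (eager) code instead. Place this at the top of app.py,
# before the model is loaded.
import torch._dynamo

torch._dynamo.config.suppress_errors = True

# Alternative (untested assumption): if you do have a compiler
# installed, you can instead point inductor at it, e.g.
#   import torch._inductor.config as inductor_config
#   inductor_config.cpp.cxx = (None, "g++")
```

With `suppress_errors` set, the `BackendCompilerFailed` above is logged rather than propagated, and `/tts` should return audio instead of an INTERNAL SERVER ERROR.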

yhai3596 commented 1 month ago

> This error shouldn't happen on Windows 11. You could try the prepackaged release.

I'll try the prepackaged version.

Thanks!

yhai3596 commented 1 month ago

> This error shouldn't happen on Windows 11. You could try the prepackaged release.

I checked and my system is actually Windows 10 Pro. Does that call for a different approach?