jianchang512 / ChatTTS-ui

A simple local web UI that uses ChatTTS to synthesize text into speech, with support for an external API.
https://pyvideotrans.com

The 0.3 Windows packaged build seems to no longer support GPU acceleration #25

Closed · lin16303 closed 1 month ago

lin16303 commented 1 month ago

Version 0.2 runs at normal speed, about 70 it/s. Here is the 0.3 log:

```
Downloading: 100%|████████████████████████████████████████████████████████████████████████| 4.16k/4.16k [00:00<?, ?B/s]
INFO:ChatTTS.core:Load from local: E:/BaiduNetdiskDownload/ChatTTS-UI-0.3/models\pzc163\chatTTS
INFO:ChatTTS.core:use cuda:0
INFO:ChatTTS.core:vocos loaded.
INFO:ChatTTS.core:dvae loaded.
INFO:ChatTTS.core:gpt loaded.
INFO:ChatTTS.core:decoder loaded.
INFO:ChatTTS.core:tokenizer loaded.
INFO:ChatTTS.core:All initialized.
启动:['127.0.0.1', '9966']
  0%|          | 0/384 [00:00<?, ?it/s]
torch\_dynamo\utils.py:1764: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
  return node.target(*args, **kwargs)
torch\_inductor\compile_fx.py:124: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting torch.set_float32_matmul_precision('high') for better performance.
  warnings.warn(
W0601 10:54:51.761023 7588 ..\torch\_dynamo\convert_frame.py:824] WON'T CONVERT forward transformers\models\llama\modeling_llama.py line 892
W0601 10:54:51.761023 7588 ..\torch\_dynamo\convert_frame.py:824] due to:
W0601 10:54:51.761023 7588 ..\torch\_dynamo\convert_frame.py:824] Traceback (most recent call last):
W0601 10:54:51.761023 7588 ..\torch\_dynamo\convert_frame.py:824]   File "torch\_dynamo\convert_frame.py", line 786, in _convert_frame
W0601 10:54:51.761023 7588 ..\torch\_dynamo\convert_frame.py:824]     result = inner_convert(
W0601 10:54:51.761023 7588 ..\torch\_dynamo\convert_frame.py:824]   File "torch\_dynamo\convert_frame.py", line 400, in _convert_frame_assert
W0601 10:54:51.761023 7588 ..\torch\_dynamo\convert_frame.py:824]     return _compile(
W0601 10:54:51.761023 7588 ..\torch\_dynamo\convert_frame.py:824]   File "contextlib.py", line 79, in inner
W0601 10:54:51.761023 7588 ..\torch\_dynamo\convert_frame.py:824]   File "torch\_dynamo\convert_frame.py", line 676, in _compile
W0601 10:54:51.761023 7588 ..\torch\_dynamo\convert_frame.py:824]     guarded_code = compile_inner(code, one_graph, hooks, transform)
W0601 10:54:51.761023 7588 ..\torch\_dynamo\convert_frame.py:824]   File "torch\_dynamo\utils.py", line 262, in time_wrapper
W0601 10:54:51.761023 7588 ..\torch\_dynamo\convert_frame.py:824]     r = func(*args, **kwargs)
W0601 10:54:51.761023 7588 ..\torch\_dynamo\convert_frame.py:824]   File "torch\_dynamo\convert_frame.py", line 535, in compile_inner
```

```
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]     inner_compiled_fn = compiler_fn(gm, example_inputs)
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]   File "torch\_inductor\debug.py", line 304, in inner
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]     return fn(*args, **kwargs)
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]   File "contextlib.py", line 79, in inner
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]   File "contextlib.py", line 79, in inner
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]   File "torch\_dynamo\utils.py", line 262, in time_wrapper
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]     r = func(*args, **kwargs)
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]   File "torch\_inductor\compile_fx.py", line 438, in compile_fx_inner
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]     compiled_graph = fx_codegen_and_compile(
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]   File "torch\_inductor\compile_fx.py", line 714, in fx_codegen_and_compile
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]     compiled_fn = graph.compile_to_fn()
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]   File "torch\_inductor\graph.py", line 1307, in compile_to_fn
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]     return self.compile_to_module().call
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]   File "torch\_dynamo\utils.py", line 262, in time_wrapper
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]     r = func(*args, **kwargs)
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]   File "torch\_inductor\graph.py", line 1250, in compile_to_module
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]     self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]   File "torch\_inductor\graph.py", line 1205, in codegen
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]     self.scheduler = Scheduler(self.buffers)
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]   File "torch\_dynamo\utils.py", line 262, in time_wrapper
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]     r = func(*args, **kwargs)
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]   File "torch\_inductor\scheduler.py", line 1267, in __init__
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]     self.nodes = [self.create_scheduler_node(n) for n in nodes]
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]   File "torch\_inductor\scheduler.py", line 1267, in <listcomp>
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]     self.nodes = [self.create_scheduler_node(n) for n in nodes]
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]   File "torch\_inductor\scheduler.py", line 1358, in create_scheduler_node
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]     return SchedulerNode(self, node)
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]   File "torch\_inductor\scheduler.py", line 687, in __init__
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]     self._compute_attrs()
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]   File "torch\_inductor\scheduler.py", line 698, in _compute_attrs
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]     group_fn = self.scheduler.get_backend(self.node.get_device()).group_fn
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]   File "torch\_inductor\scheduler.py", line 2276, in get_backend
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]     self.backends[device] = self.create_backend(device)
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]   File "torch\_inductor\scheduler.py", line 2268, in create_backend
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]     raise RuntimeError(
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824] torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824] RuntimeError: Cannot find a working triton installation. More information on installing Triton can be found at https://github.com/openai/triton
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824] Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
W0601 10:54:56.990190 7588 ..\torch\_dynamo\convert_frame.py:824]
  3%|██        | 10/384 [00:21<13:15, 2.13s/it]
  2%|█▉        | 51/2048 [00:08<05:33, 5.99it/s]
```

error.txt
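The key failure in this log is the `BackendCompilerFailed` error: `torch.compile`'s inductor backend needs a working Triton install, which standard Windows PyTorch wheels do not ship, so the run spends time on compilation attempts that cannot succeed. A minimal sketch of how one might probe for this up front and only enable compilation when it can work; the helper name `inductor_usable` and the idea of gating compilation on it are illustrative assumptions, not ChatTTS-ui's actual code:

```python
# Minimal sketch, assuming the slowdown comes from torch.compile's inductor
# backend failing for lack of Triton (as the traceback above shows).
import torch

def inductor_usable() -> bool:
    """Inductor on CUDA needs a working Triton install; probe for it."""
    if not torch.cuda.is_available():
        return False
    try:
        import triton  # noqa: F401  # typically absent from Windows wheels
    except ImportError:
        return False
    return True

use_compile = inductor_usable()
print(f"CUDA: {torch.cuda.is_available()}, safe to torch.compile: {use_compile}")
```

The TensorFloat32 warning in the same log is a separate, harmless matter of float32 matmul precision; `torch.set_float32_matmul_precision('high')` would address it but is unrelated to the slowdown.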

jianchang512 commented 1 month ago

```
INFO:ChatTTS.core:use cuda:0
```
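That INFO line only shows where the model weights were placed, not whether inference is fast. A quick standalone sanity check, sketched here independently of the app, that the CUDA build really executes kernels:

```python
# Standalone CUDA sanity check: confirms the torch build, the visible GPU,
# and that a real kernel launches on it. Independent of ChatTTS-ui itself.
import torch

print(torch.__version__, torch.version.cuda)   # e.g. 2.3.0+cu118, 11.8
print(torch.cuda.is_available())               # should print True
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
    x = torch.randn(1024, 1024, device="cuda")
    print((x @ x).sum().item())                # forces an actual GPU matmul
```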

lin16303 commented 1 month ago

This is version 0.2:

```
2024-06-01 13:52:12,166 - modelscope - INFO - PyTorch version 2.3.0+cu118 Found.
2024-06-01 13:52:12,166 - modelscope - INFO - Loading ast index from C:\Users\Alex\.cache\modelscope\ast_indexer
2024-06-01 13:52:12,170 - modelscope - INFO - Loading done! Current index file version is 1.14.0, with md5 d41d8cd98f00b204e9800998ecf8427e and a total number of 0 components indexed
INFO:ChatTTS.core:Load from local: E:/BaiduNetdiskDownload/ChatTTS-UI-0.2/models\pzc163\chatTTS
INFO:ChatTTS.core:use cuda:0
INFO:ChatTTS.core:vocos loaded.
INFO:ChatTTS.core:dvae loaded.
INFO:ChatTTS.core:gpt loaded.
INFO:ChatTTS.core:decoder loaded.
INFO:ChatTTS.core:tokenizer loaded.
INFO:ChatTTS.core:All initialized.
启动:['127.0.0.1', '9966']
  0%|          | 0/384 [00:00<?, ?it/s]
transformers\models\llama\modeling_llama.py:649: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
  attn_output = torch.nn.functional.scaled_dot_product_attention(
  2%|█▉        | 9/384 [00:00<00:40, 9.28it/s]
  5%|███▋      | 95/2048 [00:01<00:27, 70.91it/s]
```

jianchang512 commented 1 month ago

```
use cuda:0
```

Doesn't the log say right there that CUDA is being used?

jianchang512 commented 1 month ago

It is being used; the speed just seems to have dropped. Try a few more times to see whether it's intermittent.

lin16303 commented 1 month ago

Could it be related to the error here? The 0.2 build shows no such error message:

```
W0601 15:01:30.228261 4816 ..\torch\_dynamo\convert_frame.py:824] torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
W0601 15:01:30.228261 4816 ..\torch\_dynamo\convert_frame.py:824] RuntimeError: Cannot find a working triton installation. More information on installing Triton can be found at https://github.com/openai/triton
W0601 15:01:30.228261 4816 ..\torch\_dynamo\convert_frame.py:824]
W0601 15:01:30.228261 4816 ..\torch\_dynamo\convert_frame.py:824] Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
W0601 15:01:30.228261 4816 ..\torch\_dynamo\convert_frame.py:824]
  4%|███▍      | 16/384 [00:22<08:39, 1.41s/it]
  5%|███▋      | 93/2048 [00:15<05:20, 6.09it/s]
```
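This is the same missing-Triton failure from the first log, and it plausibly explains the drop from ~70 it/s to ~6 it/s: the 0.3 build enables `torch.compile` and pays for compilation attempts that can never succeed, while 0.2 runs the plain eager SDPA path. One possible mitigation, sketched under the assumption that the app's startup code can be patched before the model loads (`suppress_errors` is a real `torch._dynamo` config flag; where to set it in ChatTTS-ui is the assumption):

```python
# Sketch: tell TorchDynamo to swallow backend compile failures and fall back
# to eager execution instead of raising. Must run before the first compiled
# call; may recover most of the lost speed when Triton is missing, though
# skipping torch.compile entirely on such systems would be cleaner.
import torch._dynamo

torch._dynamo.config.suppress_errors = True
```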