jianchang512 / ChatTTS-ui

A simple local web interface that uses ChatTTS to synthesize text into speech, with support for an external API.
https://pyvideotrans.com

Text that is too long raises an error #119

Open wjzdw007 opened 3 weeks ago

wjzdw007 commented 3 weeks ago
```
rv = self.handle_user_exception(e)
  File "D:\ProgramData\anaconda3\envs\chattts\lib\site-packages\flask\app.py", line 880, in full_dispatch_request
    rv = self.dispatch_request()
  File "D:\ProgramData\anaconda3\envs\chattts\lib\site-packages\flask\app.py", line 865, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
  File "C:\Users\Administrator\Desktop\chattts\chatTTS-ui\app.py", line 170, in tts
    wavs = chat.infer(new_text, use_decoder=True, skip_refine_text=True if int(skip_refine)==1 else False, params_infer_code={
  File "C:\Users\Administrator\Desktop\chattts\chatTTS-ui\ChatTTS\core.py", line 166, in infer
    result = infer_code(self.pretrain_models, text, **params_infer_code, return_hidden=use_decoder)
  File "C:\Users\Administrator\Desktop\chattts\chatTTS-ui\ChatTTS\infer\api.py", line 60, in infer_code
    result = models['gpt'].generate(
  File "C:\Users\Administrator\Desktop\chattts\chatTTS-ui\ChatTTS\model\gpt.py", line 203, in generate
    outputs = self.gpt.forward(model_input, output_attentions=return_attn)
  File "D:\ProgramData\anaconda3\envs\chattts\lib\site-packages\transformers\models\llama\modeling_llama.py", line 968, in forward
    layer_outputs = decoder_layer(
  File "D:\ProgramData\anaconda3\envs\chattts\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ProgramData\anaconda3\envs\chattts\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\ProgramData\anaconda3\envs\chattts\lib\site-packages\transformers\models\llama\modeling_llama.py", line 713, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "D:\ProgramData\anaconda3\envs\chattts\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ProgramData\anaconda3\envs\chattts\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\ProgramData\anaconda3\envs\chattts\lib\site-packages\transformers\models\llama\modeling_llama.py", line 629, in forward
    key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
  File "D:\ProgramData\anaconda3\envs\chattts\lib\site-packages\transformers\cache_utils.py", line 156, in update
    self.value_cache[layer_idx] = torch.cat([self.value_cache[layer_idx], value_states], dim=-2)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 144.00 MiB. GPU
```
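The traceback ends in a CUDA out-of-memory error inside the transformer's KV cache, which grows with the length of the generated sequence. One common workaround (not part of the project, just a sketch) is to split long input at sentence boundaries and synthesize each chunk separately, so no single `chat.infer` call has to hold the whole sequence's cache. The `chunk_text` helper below is hypothetical, and the `max_chars` threshold is an assumption you would tune to your GPU:

```python
import re

def chunk_text(text, max_chars=120):
    """Split text at sentence-ending punctuation so that each chunk
    stays at or under max_chars characters.

    A single sentence longer than max_chars is still emitted as its
    own (oversized) chunk rather than being cut mid-sentence.
    """
    # Split after Chinese or Western sentence-ending punctuation,
    # keeping the punctuation attached to the preceding sentence.
    sentences = re.split(r'(?<=[。!?.!?])', text)
    chunks, current = [], ''
    for s in sentences:
        if not s:
            continue
        if current and len(current) + len(s) > max_chars:
            chunks.append(current)
            current = s
        else:
            current += s
    if current:
        chunks.append(current)
    return chunks

# Hypothetical usage with the calls seen in the traceback above:
# wavs = []
# for chunk in chunk_text(long_text, max_chars=120):
#     wavs.extend(chat.infer(chunk, use_decoder=True))
```

Concatenating the resulting wav segments (e.g. with `numpy.concatenate`) then yields one audio file; there may be slight prosody breaks at chunk boundaries.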

gakkiox commented 2 weeks ago

What is the maximum length of Chinese text this currently supports for reading aloud?