oceanio closed this issue 1 month ago
Did you solve it?
Try my approach: pin torch==2.1.2 in requirements.txt, then rerun pip3 install -r requirements.txt. That fixed it for me.
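Concretely, the suggestion above amounts to editing the torch line in requirements.txt and reinstalling. The sed one-liner here is just one way to do the edit (you can also change the file by hand); the version number comes from the comment above, and pip may ask you to adjust torchvision to a matching version:

```shell
# Pin torch to the version suggested above (edit requirements.txt in place)
sed -i 's/^torch.*/torch==2.1.2/' requirements.txt

# Reinstall the pinned dependencies
pip3 install -r requirements.txt
```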
@SirZcom Which torch version did you have originally? 2.1.3? And if you switch to 2.1.2, will torchvision change along with it?
Mine was probably a higher version; I listed the candidate versions and tried them one by one. Give it a try first and see if it works.
Had the same issue. After hours of trial and error I realised that the temperature value can't be 0; with temperature set to 0 it simply won't run.
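A minimal standalone reproduction of why temperature 0 triggers this error (plain PyTorch, not ChatTTS code): dividing the logits by 0 produces inf, softmax over all-inf logits produces nan, and torch.multinomial then rejects the probability tensor with exactly this RuntimeError.

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([1.0, 2.0, 3.0])

# A positive temperature keeps the probabilities finite, and sampling works.
probs_ok = F.softmax(logits / 0.7, dim=-1)
idx = torch.multinomial(probs_ok, num_samples=1)

# temperature = 0: logits / 0.0 -> inf, softmax of all-inf -> nan,
# and multinomial raises the RuntimeError shown in the traceback below.
probs_bad = F.softmax(logits / 0.0, dim=-1)
print(torch.isnan(probs_bad).all())  # tensor(True)
try:
    torch.multinomial(probs_bad, num_samples=1)
except RuntimeError as e:
    print(e)
```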
This issue was closed because it has been inactive for 15 days since being marked as stale.
Running the most basic infer, I hit the problem below. How can I fix it?
RuntimeError                              Traceback (most recent call last)
Cell In[5], line 1
----> 1 wavs = chat.infer(texts, use_decoder=True)

File /mnt/st2/zhangyang/ChatTTS/ChatTTS/core.py:132, in Chat.infer(self, text, skip_refine_text, refine_text_only, params_refine_text, params_infer_code, use_decoder)
    129 assert self.check_model(use_decoder=use_decoder)
    131 if not skip_refine_text:
--> 132     text_tokens = refine_text(self.pretrain_models, text, **params_refine_text)['ids']
    133     text_tokens = [i[i < self.pretrain_models['tokenizer'].convert_tokens_to_ids('[break_0]')] for i in text_tokens]
    134     text = self.pretrain_models['tokenizer'].batch_decode(text_tokens)

File /mnt/st2/zhangyang/ChatTTS/ChatTTS/infer/api.py:114, in refine_text(models, text, top_P, top_K, temperature, repetition_penalty, max_new_token, prompt, **kwargs)
    111 if repetition_penalty is not None and repetition_penalty != 1:
    112     LogitsProcessors.append(CustomRepetitionPenaltyLogitsProcessorRepeat(repetition_penalty, len(models['tokenizer']), 16))
--> 114 result = models['gpt'].generate(
    115     models['gpt'].get_emb(inputs), inputs['input_ids'],
    116     temperature = torch.tensor([temperature,], device=device),
    117     attention_mask = inputs['attention_mask'],
    118     LogitsWarpers = LogitsWarpers,
    119     LogitsProcessors = LogitsProcessors,
    120     eos_token = torch.tensor(models['tokenizer'].convert_tokens_to_ids('[Ebreak]'), device=device)[None],
    121     max_new_token = max_new_token,
    122     infer_text = True,
    123     **kwargs
    124 )
    125 return result

File /mnt/st2/zhangyang/ChatTTS/ChatTTS/model/gpt.py:236, in GPT_warpper.generate(self, emb, inputs_ids, temperature, eos_token, attention_mask, max_new_token, min_new_token, LogitsWarpers, LogitsProcessors, infer_text, return_attn, return_hidden)
    232 logits[:, eos_token] = -torch.inf
    234 scores = F.softmax(logits, dim=-1)
--> 236 idx_next = torch.multinomial(scores, num_samples=1)
    238 if not infer_text:
    239     idx_next = rearrange(idx_next, "(b n) 1 -> b n", n=self.num_vq)

RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
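Given the comments above, one workaround is to clamp the sampling temperature away from zero before passing it in. This is a hypothetical helper, not part of ChatTTS; the params_refine_text name and the chat.infer call in the commented usage come from the traceback above:

```python
def safe_temperature(t: float, floor: float = 1e-4) -> float:
    """Clamp a sampling temperature to a small positive floor so that
    logits / temperature never produces inf (and softmax never produces nan)."""
    return max(float(t), floor)

# Hypothetical usage with the chat.infer call from the traceback:
# params_refine_text = {'temperature': safe_temperature(0.0)}  # 1e-4 instead of 0
# wavs = chat.infer(texts, use_decoder=True, params_refine_text=params_refine_text)

print(safe_temperature(0.0))  # 0.0001
print(safe_temperature(0.7))  # 0.7
```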