wlc952 opened this issue 1 month ago (status: Open)
There is also another error:
G:\GPT-SoVITS\.venv\Lib\site-packages\gradio_client\documentation.py:103: UserWarning: Could not get documentation group
for <class 'gradio.mix.Parallel'>: No known documentation group for module 'gradio.mix'
warnings.warn(f"Could not get documentation group for {cls}: {exc}")
G:\GPT-SoVITS\.venv\Lib\site-packages\gradio_client\documentation.py:103: UserWarning: Could not get documentation group
for <class 'gradio.mix.Series'>: No known documentation group for module 'gradio.mix'
warnings.warn(f"Could not get documentation group for {cls}: {exc}")
------------------------------------------------------------------------------------------------------------------------
onnx_export2.py 332 <module>
export(vits_path, gpt_path, exp_path)
onnx_export2.py 271 export
vits = VitsModel(vits_path)
onnx_export2.py 201 __init__
self.vq_model = SynthesizerTrn(
models_onnx.py 842 __init__
self.enc_p = TextEncoder(
models_onnx.py 210 __init__
self.text_embedding = nn.Embedding(len(symbols), hidden_channels)
TypeError: object of type 'module' has no len()
A few months ago I exported the v1 model without problems; now the v2 model cannot be exported.
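For what it's worth, "object of type 'module' has no len()" is what Python raises when `len()` is applied to a module object instead of a list. That suggests `symbols` in models_onnx.py is resolving to an imported module (e.g. via something like `from text import symbols`) rather than to the symbol list defined inside it; v2 reorganized the symbol tables, which could explain why v1 exported fine. A minimal sketch of the failure mode, with hypothetical names:

```python
import types

# Stand-in for an accidentally imported `symbols` *module*
# (hypothetical: the real module lives under GPT_SoVITS/text/).
symbols_module = types.ModuleType("symbols")
symbols_module.symbols = ["_", "a", "b", "c"]  # hypothetical symbol list inside it

# Reproduces the TypeError from the traceback: len() of a module fails.
try:
    len(symbols_module)
except TypeError as e:
    print(e)  # object of type 'module' has no len()

# Reading the list attribute inside the module works as expected:
print(len(symbols_module.symbols))  # 4
```

If that is the cause, the fix would be to import the symbol list itself (or reference the list attribute of the module) before passing it to `nn.Embedding` — but that is an assumption from the error message, not a verified patch.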
PS D:\Documents\GPT-SoVITS-beta0706> & d:/Documents/GPT-SoVITS-beta0706/runtime/python.exe d:/Documents/GPT-SoVITS-beta0706/GPT_SoVITS/onnx_export.py
kmeans start ... 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:25<00:00, 1.96it/s]
Traceback (most recent call last):
  File "d:\Documents\GPT-SoVITS-beta0706\GPT_SoVITS\onnx_export.py", line 335, in <module>
    export(vits_path, gpt_path, exp_path)
  File "d:\Documents\GPT-SoVITS-beta0706\GPT_SoVITS\onnx_export.py", line 302, in export
    a = gpt_sovits(ref_seq, text_seq, ref_bert, text_bert, ref_audio_sr, ssl_content).detach().cpu().numpy()
  File "D:\Documents\GPT-SoVITS-beta0706\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "d:\Documents\GPT-SoVITS-beta0706\GPT_SoVITS\onnx_export.py", line 232, in forward
    pred_semantic = self.t2s(ref_seq, text_seq, ref_bert, text_bert, ssl_content)
  File "D:\Documents\GPT-SoVITS-beta0706\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "d:\Documents\GPT-SoVITS-beta0706\GPT_SoVITS\onnx_export.py", line 118, in forward
    y, k, v, y_emb, x_example = self.first_stage_decoder(x, prompts)
  File "D:\Documents\GPT-SoVITS-beta0706\runtime\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Documents\GPT-SoVITS-beta0706\GPT_SoVITS\AR\models\t2s_model_onnx.py", line 155, in forward
    samples = sample(logits[0], y, top_k=self.top_k, top_p=1.0, repetition_penalty=1.35)[0].unsqueeze(0)
  File "D:\Documents\GPT-SoVITS-beta0706\GPT_SoVITS\AR\models\t2s_model_onnx.py", line 81, in sample
    probs = logits_to_probs(
  File "D:\Documents\GPT-SoVITS-beta0706\GPT_SoVITS\AR\models\t2s_model_onnx.py", line 61, in logits_to_probs
    v, _ = torch.topk(logits, top_k)
TypeError: topk(): argument 'k' (position 2) must be int, not Tensor
It seems the value can't simply be force-cast to int here either — what is going on?
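A minimal reproduction of the `topk` error, plus the obvious workaround of converting the Tensor to a plain Python int before the call. One hedged note on why a cast might seem "not to work": during ONNX export tracing, `int(tensor)` / `tensor.item()` succeeds but bakes the current value in as a constant (PyTorch emits a TracerWarning about it), so the exported graph no longer depends on `top_k`:

```python
import torch

logits = torch.randn(1, 32)
top_k = torch.tensor(15)  # a Tensor, as self.top_k appears to be in t2s_model_onnx.py

# Passing a Tensor as `k` is what the traceback complains about;
# torch.topk expects a Python int here (in eager mode).
try:
    torch.topk(logits, top_k)
except TypeError as e:
    print(e)

# Converting to a plain int works in eager mode:
v, _ = torch.topk(logits, int(top_k.item()))
print(v.shape)  # torch.Size([1, 15])
```

If `self.top_k` is created at init time, another option is to store it as a plain int from the start (e.g. `self.top_k = int(top_k)` in `__init__`), so nothing needs converting inside the traced forward pass. This is a sketch of the failure mode under those assumptions, not a verified fix for the exporter.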