YYuX-1145 / Bert-VITS2-Integration-package

vits2 backbone with bert
https://www.bilibili.com/video/BV13p4y1d7v9
GNU Affero General Public License v3.0

Has anyone run into this error? assert (discriminant >= 0).all() #51

Open fword opened 7 months ago

fword commented 7 months ago

File "/root/Bert-VITS2-Integration-Package/train_ms.py", line 193, in run train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, eval_loader], logger, [writer, writer_eval]) File "/root/Bert-VITS2-Integration-Package/train_ms.py", line 329, in train_and_evaluate evaluate(hps, net_g, eval_loader, writer_eval) File "/root/Bert-VITS2-Integration-Package/train_ms.py", line 363, in evaluate yhat, attn, mask, * = generator.module.infer(x, x_lengths, speakers, tone, language, bert, y=spec, max_len=1000, sdp_ratio=0.0 if not use_sdp else 1.0) File "/root/Bert-VITS2-Integration-Package/models.py", line 692, in infer logw = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) (sdp_ratio) + self.dp(x, x_mask, g=g) (1 - sdp_ratio) File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, *kwargs) File "/root/Bert-VITS2-Integration-Package/models.py", line 199, in forward z = flow(z, x_mask, g=x, reverse=reverse) File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(args, **kwargs) File "/root/Bert-VITS2-Integration-Package/modules.py", line 374, in forward x1, logabsdet = piecewise_rational_quadratic_transform(x1, File "/root/Bert-VITS2-Integration-Package/transforms.py", line 33, in piecewise_rational_quadratic_transform outputs, logabsdet = spline_fn( File "/root/Bert-VITS2-Integration-Package/transforms.py", line 82, in unconstrained_rational_quadratic_spline outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( File "/root/Bert-VITS2-Integration-Package/transforms.py", line 164, in rational_quadratic_spline assert (discriminant >= 0).all() AssertionError

fword commented 7 months ago

Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/root/Bert-VITS2-Integration-Package/train_ms.py", line 193, in run
    train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, eval_loader], logger, [writer, writer_eval])
  File "/root/Bert-VITS2-Integration-Package/train_ms.py", line 231, in train_and_evaluate
    (z, z_p, m_p, logs_p, m_q, logs_q), (hidden_x, logw, logw_) = net_g(x, x_lengths, spec, spec_lengths, speakers, tone, language, bert)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1156, in forward
    output = self._run_ddp_forward(*inputs, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1110, in _run_ddp_forward
    return module_to_run(*inputs[0], **kwargs[0])  # type: ignore[index]
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/Bert-VITS2-Integration-Package/models.py", line 680, in forward
    z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size)
  File "/root/Bert-VITS2-Integration-Package/commons.py", line 63, in rand_slice_segments
    ret = slice_segments(x, ids_str, segment_size)
  File "/root/Bert-VITS2-Integration-Package/commons.py", line 53, in slice_segments
    ret[i] = x[i, :, idx_str:idx_end]
RuntimeError: The expanded size of the tensor (32) must match the existing size (0) at non-singleton dimension 1. Target sizes: [192, 32]. Tensor sizes: [192, 0]
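For context on what that RuntimeError means: in the standard VITS commons.py (which the paths in this traceback suggest the package follows), rand_slice_segments samples a random start frame for each utterance, and the maximum allowed start index goes negative whenever an utterance has fewer spectrogram frames than segment_size // hop_length (8192 / 256 = 32 here, which appears to match the "[192, 32]" target shape). A negative start index then resolves near the end of the padded batch tensor while the slice end stays small, so the slice comes back shorter than 32 frames or even empty. A minimal sketch, assuming the standard implementation rather than code copied from this repo:

```python
import torch

# Assumed to match the repo's commons.py (standard VITS implementation).
def slice_segments(x, ids_str, segment_size):
    ret = torch.zeros_like(x[:, :, :segment_size])
    for i in range(x.size(0)):
        idx_str = ids_str[i]
        idx_end = idx_str + segment_size
        ret[i] = x[i, :, idx_str:idx_end]  # can be shorter than segment_size, or empty
    return ret

def rand_slice_segments(x, x_lengths, segment_size):
    b, d, t = x.size()
    # For a clip with fewer frames than segment_size, ids_str_max is negative,
    # so the sampled start index can be negative as well.
    ids_str_max = x_lengths - segment_size + 1
    ids_str = (torch.rand([b], device=x.device) * ids_str_max).to(dtype=torch.long)
    return slice_segments(x, ids_str, segment_size), ids_str

segment_frames = 8192 // 256          # segment_size in frames = 32
z = torch.randn(2, 192, 100)          # padded batch, 100 frames max
z_lengths = torch.tensor([100, 20])   # second clip has only 20 frames (< 32)

# For the 20-frame clip, rand_slice_segments would sample a start index between
# about -11 and 0. Using a hand-picked index of -11 makes the failure
# deterministic: it resolves to frame 89 of the padded tensor while the end
# stays at -11 + 32 = 21, so the slice is empty and the copy into a [192, 32]
# buffer raises the RuntimeError shown above.
slice_segments(z, torch.tensor([10, -11]), segment_frames)
```

In other words, any training or validation clip shorter than segment_size samples (roughly 0.37 s at 22050 Hz for 8192) can trigger this.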

littlecutenainia commented 6 months ago

I'm hitting the same error. Did you manage to solve it?

BangjianZhou commented 5 months ago

Same here. Has the cause been found?

MarkPoloChina commented 5 months ago

Change segment_size in config.json. The default segment_size is probably too large; it just needs to be an integer multiple of hop_length. Try working your way down from 8192. #32
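One way to check a candidate value against a concrete dataset is a small script along these lines. It is only a sketch: it assumes the usual VITS-style config.json keys (train.segment_size, data.hop_length, data.sampling_rate, data.training_files) and a pipe-separated training filelist whose first column is the wav path, so adjust the paths and column index if this package differs.

```python
import json
import soundfile as sf  # only used to read clip lengths; any wav reader works

CONFIG = "configs/config.json"  # path is an assumption, adjust to your setup

with open(CONFIG, encoding="utf-8") as f:
    cfg = json.load(f)

seg = cfg["train"]["segment_size"]
hop = cfg["data"]["hop_length"]
sr = cfg["data"]["sampling_rate"]

assert seg % hop == 0, "segment_size should be an integer multiple of hop_length"
print(f"segment_size = {seg} samples = {seg // hop} frames = {seg / sr:.3f} s")

# Roughly, every training clip needs at least segment_size samples, otherwise
# rand_slice_segments cannot cut a full segment out of it.
with open(cfg["data"]["training_files"], encoding="utf-8") as f:
    for line in f:
        wav_path = line.strip().split("|")[0]
        n_samples = sf.info(wav_path).frames
        if n_samples < seg:
            print(f"too short ({n_samples} samples, {n_samples / sr:.2f} s): {wav_path}")
```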

BangjianZhou commented 5 months ago

Change segment_size in config.json. The default segment_size is probably too large; it just needs to be an integer multiple of hop_length. Try working your way down from 8192. #32

Could I ask for some advice? I'm using segment_size: 8192, max_wav_value: 32768.0, sampling_rate: 22050, filter_length: 1024, hop_length: 256, win_length: 1024, n_mel_channels: 80. These are the same settings I used to train my own VITS1 and p0p4k's VITS2 implementation (https://github.com/p0p4k/vits2_pytorch), and neither of them ever hit this error. Now that BERT has been added and training fails, do I also need to reduce segment_size? What is the cause?

MarkPoloChina commented 5 months ago

Change segment_size in config.json. The default segment_size is probably too large; it just needs to be an integer multiple of hop_length. Try working your way down from 8192. #32

Could I ask for some advice? I'm using segment_size: 8192, max_wav_value: 32768.0, sampling_rate: 22050, filter_length: 1024, hop_length: 256, win_length: 1024, n_mel_channels: 80. These are the same settings I used to train my own VITS1 and p0p4k's VITS2 implementation (https://github.com/p0p4k/vits2_pytorch), and neither of them ever hit this error. Now that BERT has been added and training fails, do I also need to reduce segment_size? What is the cause?

Bert-VITS2 does seem to have special requirements on segment_size, but I don't know the exact reason either; for now it looks like it is related to how long the sliced samples are. On my dataset 8192 still didn't work; it only ran with 4096.
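If that hypothesis holds (clips shorter than one segment break the random slicing), a rule of thumb would be to cap segment_size at the shortest clip length in the dataset, rounded down to a multiple of hop_length. A sketch of the arithmetic, using a made-up minimum clip length:

```python
hop_length = 256
min_clip_samples = 4500  # hypothetical: shortest clip found by scanning the dataset

# Largest segment_size that still fits inside every clip and remains a multiple
# of hop_length; with the numbers above that is 17 * 256 = 4352, so a round
# value like 4096 (as reported here) would slice cleanly.
safe_segment_size = (min_clip_samples // hop_length) * hop_length
print(safe_segment_size)
```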