yxlllc / DDSP-SVC

Real-time end-to-end singing voice conversion system based on DDSP (Differentiable Digital Signal Processing)
MIT License

Error in the inference process #97

Open rlatlqkf opened 1 year ago

rlatlqkf commented 1 year ago

Traceback (most recent call last):
  File "D:\mnt\0)DDSP-SVC\main.py", line 261, in &lt;module&gt;
    seg_output, _, (s_h, s_n) = model(seg_units, seg_f0, seg_volume, spk_id=spk_id, spk_mix_dict=spk_mix_dict)
  File "C:\Users\wasan\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\wasan\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\mnt\0)DDSP-SVC\ddsp\vocoder.py", line 628, in forward
    ctrls, hidden = self.unit2ctrl(units_frames, f0_frames, phase_frames, volume_frames, spk_id=spk_id, spk_mix_dict=spk_mix_dict, aug_shift=aug_shift)
  File "C:\Users\wasan\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\wasan\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\mnt\0)DDSP-SVC\ddsp\unit2control.py", line 78, in forward
    x = self.stack(units.transpose(1,2)).transpose(1,2)
  File "C:\Users\wasan\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\wasan\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\wasan\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\container.py", line 215, in forward
    input = module(input)
  File "C:\Users\wasan\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\wasan\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\wasan\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\conv.py", line 310, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "C:\Users\wasan\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\conv.py", line 306, in _conv_forward
    return F.conv1d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [256, 768, 3], expected input[1, 256, 450] to have 768 channels, but got 256 channels instead

Is there a way to fix this?
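For context, the shape mismatch in the last frame can be reproduced in isolation. The sketch below is plain PyTorch, not DDSP-SVC code: it builds a Conv1d whose weight has the same shape as in the error, [256, 768, 3], and feeds it a 256-channel input, which raises the same RuntimeError.

```python
import torch
import torch.nn as nn

# Illustrative only: a Conv1d expecting 768 input channels (weight [256, 768, 3]),
# fed 256-channel "unit" features shaped (batch, channels, frames).
conv = nn.Conv1d(in_channels=768, out_channels=256, kernel_size=3, padding=1)
units = torch.randn(1, 256, 450)  # 256-dim units, 450 frames
conv(units)  # RuntimeError: expected input to have 768 channels, but got 256
```

In other words, the first layer of the unit2control stack expects 768-dimensional unit features, but the units passed in have only 256 dimensions.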

yxlllc commented 1 year ago

Your model probably does not match the units encoder: judging from the error, the checkpoint expects 768-dimensional units, but the encoder used at inference is producing 256-dimensional ones. Check whether the configuration file matches the one the model was trained with, or whether it has been modified.
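One way to check is to load the config.yaml saved with the checkpoint and compare the units-encoder settings against the encoder actually used at inference. A minimal sketch follows; the path is hypothetical and the field names (data.encoder, data.encoder_out_channels) are assumptions based on the stock DDSP-SVC configs, so adjust them to your actual file.

```python
import yaml

# Illustrative check, not part of the repo: read the encoder settings from the
# config stored next to the trained model (path is hypothetical).
with open("exp/my_model/config.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg["data"]["encoder"])               # e.g. 'contentvec768l12' -> 768-dim units
print(cfg["data"]["encoder_out_channels"])  # should match the 768 channels the weights expect
```

If the model was trained with a 768-dimensional encoder (e.g. contentvec768l12) but inference runs with a 256-dimensional one (e.g. hubertsoft), or vice versa, this exact channel mismatch appears.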