sigsep / open-unmix-pytorch

Open-Unmix - Music Source Separation for PyTorch
https://sigsep.github.io/open-unmix/
MIT License

RuntimeError: "reflection_pad1d_out_template" not implemented for 'Short' : when using separate(...) method #119

Open · deepakpawade opened this issue 2 years ago

deepakpawade commented 2 years ago

```python
estimates = separate(
    audio=mix_torch,
    targets=['podcasts'],
    model_str_or_path='../scripts/open-unmix-512',
    device='cuda',
    rate=rate,
)
```

Stack trace:

```
RuntimeError                              Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_14992/3837081952.py in <module>
----> 1 estimates = separate(audio=mix_torch,
      2     targets=['podcasts'],
      3     model_str_or_path='../scripts/open-unmix-512',
      4     device='cuda',
      5     rate=rate)

d:\InterferenceSeperation\umx_demo\openunmix\predict.py in separate(audio, rate, model_str_or_path, targets, niter, residual, wiener_win_len, aggregate_dict, separator, device, filterbank)
     76
     77     # getting the separated signals
---> 78     estimates = separator(audio)
     79     estimates = separator.to_dict(estimates, aggregate_dict=aggregate_dict)
     80     return estimates

c:\Users\deepdesk\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
   1100         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1101                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102             return forward_call(*input, **kwargs)
   1103         # Do not call functions when jit is used
   1104         full_backward_hooks, non_full_backward_hooks = [], []

c:\Users\deepdesk\AppData\Local\Programs\Python\Python39\lib\site-packages\openunmix\model.py in forward(self, audio)
    256         # getting the STFT of mix:
    257         # (nb_samples, nb_channels, nb_bins, nb_frames, 2)
--> 258         mix_stft = self.stft(audio)
    259         X = self.complexnorm(mix_stft)
    260

c:\Users\deepdesk\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
   1100         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1101                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102             return forward_call(*input, **kwargs)
   1103         # Do not call functions when jit is used
   1104         full_backward_hooks, non_full_backward_hooks = [], []

c:\Users\deepdesk\AppData\Local\Programs\Python\Python39\lib\site-packages\openunmix\transforms.py in forward(self, x)
     97         x = x.view(-1, shape[-1])
     98
---> 99         complex_stft = torch.stft(
    100             x,
    101             n_fft=self.n_fft,

c:\Users\deepdesk\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\functional.py in stft(input, n_fft, hop_length, win_length, window, center, pad_mode, normalized, onesided, return_complex)
    568         extended_shape = [1] * (3 - signal_dim) + list(input.size())
    569         pad = int(n_fft // 2)
--> 570         input = F.pad(input.view(extended_shape), [pad, pad], pad_mode)
    571         input = input.view(input.shape[-signal_dim:])
    572         return _VF.stft(input, n_fft, hop_length, win_length, window,  # type: ignore[attr-defined]

c:\Users\deepdesk\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\functional.py in _pad(input, pad, mode, value)
   4177     if len(pad) == 2 and (input.dim() == 2 or input.dim() == 3):
   4178         if mode == "reflect":
-> 4179             return torch._C._nn.reflection_pad1d(input, pad)
   4180         elif mode == "replicate":
   4181             return torch._C._nn.replication_pad1d(input, pad)

RuntimeError: "reflection_pad1d_out_template" not implemented for 'Short'
```

Environment:

```
python 3.9.7
torch 1.10.1+cu113
torchaudio 0.10.1+cu113
torchvision 0.11.2+cu113
cuda 11.7.r11.7
```
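The dtype in the error is telling: 'Short' is PyTorch's name for int16, so `mix_torch` most likely holds raw integer PCM samples rather than floating-point audio, and `torch.stft` cannot apply reflection padding to an integer tensor. A minimal sketch of the fix (not confirmed in this thread; the int16 buffer here is simulated, e.g. as `scipy.io.wavfile.read` would return it):

```python
import numpy as np
import torch

# Simulated int16 PCM audio: 2 channels, 1 second at 44.1 kHz
rate = 44100
pcm = (np.random.randn(2, rate) * 3000).astype(np.int16)

mix_torch = torch.as_tensor(pcm)
print(mix_torch.dtype)  # torch.int16 ('Short') -- this dtype triggers the error

# Convert to float32 and rescale to [-1, 1] before calling separate(...)
mix_torch = mix_torch.to(torch.float32) / 32768.0
print(mix_torch.dtype)  # torch.float32
```

Loading the file with `torchaudio.load`, which returns normalized float32 by default, would avoid the conversion entirely.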

deepakpawade commented 2 years ago

Do I need to install CUDA 11.3?

faroit commented 2 years ago

@deepakpawade the current master version doesn't support 1.10 yet. The tests still run on torch 1.9. See #112

deepakpawade commented 2 years ago

> @deepakpawade the current master version doesn't support 1.10 yet. The tests still run on torch 1.9. See #112

@faroit I was having compatibility issues with 1.9.0 + CUDA 10.x and other libraries, so I installed 1.9.1 with CUDA 11.1 and still got the same error. Is it strictly dependent on 1.9.0?

cuda 11.1
torch 1.9.1+cu111
torchaudio 0.9.1
torchvision 0.10.1+cu111

deepakpawade commented 2 years ago

Also, can we do it in a different way without using separate(...) method or torch?

QinHsiu commented 11 months ago

I have the same problem: RuntimeError: "reflection_pad1d_out_template" not implemented for 'Long'
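'Long' is int64, so this points at the same root cause as the original report: an integer-typed audio tensor reaching `torch.stft`. A small standalone check, independent of Open-Unmix, that guards against it (the tensor shape and STFT parameters here are illustrative assumptions):

```python
import torch

# e.g. samples decoded as int64 ('Long'); stereo, 1024 samples per channel
x = torch.zeros(2, 1024, dtype=torch.int64)

# torch.stft (called inside Separator.forward) needs floating-point input;
# integer dtypes like int16 ('Short') or int64 ('Long') hit the
# reflection-pad RuntimeError, so cast before the transform.
if not torch.is_floating_point(x):
    x = x.to(torch.float32)

spec = torch.stft(x, n_fft=512, return_complex=True)
print(spec.shape)  # (2, 257, n_frames)
```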

faroit commented 4 months ago

@deepakpawade can this be closed?

deepakpawade commented 3 months ago

Yes please. @faroit