Open IdoWSC opened 3 years ago
It seems not all tensors are on the same device; some are on the GPU and some on the CPU. Can you please post the command you used to launch the experiment so we can reproduce the error? Also, can you please make sure the target supervision (the clean signal) is also copied to the GPU?
Hi @IdoWSC, I also encountered a similar problem. I found that the main cause of this error is that `x` and `window` must be tensors on the same device. One possible fix is to patch the `stft` wrapper so the window is moved to the input's device:
```python
return _VF.stft(input, n_fft, hop_length, win_length, window.to(input.device),
                normalized, onesided, return_complex)
```
Hey @IdoWSC and @chadHGY , can you try again after pulling from master? This should be fixed now.
Hi Alexandre, I'm trying to reproduce the results and am facing the same error. My environment is Torch 1.7.1 + CUDA 11.0. I have tried the method suggested by @chadHGY,
modifying the return statement to

```python
return _VF.stft(input, n_fft, hop_length, win_length, window.to(input.device),
                normalized, onesided, return_complex)
```
and the error was eliminated.
As indicated previously, the problem is solved by doing the following in the forward method of the STFTLoss class:
```python
x_mag = stft(x, self.fft_size, self.shift_size, self.win_length, self.window.to(x.device))
y_mag = stft(y, self.fft_size, self.shift_size, self.win_length, self.window.to(x.device))
```
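A more general way to avoid this class of bug is to register the window as a module buffer, so that `model.to(device)` moves it along with the parameters. The sketch below is an illustration of that pattern, not the actual implementation from this repo; the class and attribute names (`STFTLoss`, `fft_size`, `shift_size`, `win_length`) just mirror the snippet above, and the loss formula is a placeholder magnitude difference:

```python
import torch

class STFTLoss(torch.nn.Module):
    """Minimal sketch of an STFT loss whose window follows the module's device."""

    def __init__(self, fft_size=1024, shift_size=120, win_length=600):
        super().__init__()
        self.fft_size = fft_size
        self.shift_size = shift_size
        self.win_length = win_length
        # register_buffer (instead of a plain tensor attribute) makes
        # .to()/.cuda() move the window together with the module, so it
        # cannot be left behind on the CPU.
        self.register_buffer("window", torch.hann_window(win_length))

    def forward(self, x, y):
        # The buffer already lives on the module's device; following the
        # input's device here is an extra safeguard.
        w = self.window.to(x.device)
        x_spec = torch.stft(x, self.fft_size, self.shift_size, self.win_length,
                            w, return_complex=True)
        y_spec = torch.stft(y, self.fft_size, self.shift_size, self.win_length,
                            w, return_complex=True)
        # Placeholder spectral magnitude loss for illustration only.
        return (x_spec.abs() - y_spec.abs()).abs().mean()
```

With this pattern, calling `loss_fn = STFTLoss().cuda()` is enough; no per-call `.to(x.device)` patching of `torch.functional.stft` is needed.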
Hi, when fine-tuning on a GPU machine with the STFT loss set to true in the config file, I get an error:
Any idea why this is happening?
Thanks in advance!