kan-bayashi / ParallelWaveGAN

Unofficial Parallel WaveGAN (+ MelGAN & Multi-band MelGAN & HiFi-GAN & StyleMelGAN) with Pytorch
https://kan-bayashi.github.io/ParallelWaveGAN/
MIT License

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! #223

Closed ghost closed 4 years ago

ghost commented 4 years ago

I made my own recipe, but I got this error when I started training. I have no idea how it occurred; in theory it shouldn't depend on which recipe I use, and I can't even tell which tensor is not on the GPU:

`sc_loss, mag_loss = self.criterion["stft"](y_.squeeze(1), y.squeeze(1))`: both `y_` and `y` are on the GPU.
`sc_l, mag_l = f(x, y)`: both `x` and `y` are on the GPU.
`x_stft = torch.stft(x, fft_size, hop_size, win_length, window)`: `x` is on the GPU.

```
[train]:   0%| | 0/400000 [00:00<?, ?it/s]
/home/train/.local/lib/python3.7/site-packages/torch/functional.py:516: UserWarning: stft will require the return_complex parameter be explicitly specified in a future PyTorch release. Use return_complex=False to preserve the current behavior or return_complex=True to return a complex output. (Triggered internally at /pytorch/aten/src/ATen/native/SpectralOps.cpp:653.)
  normalized, onesided, return_complex)
Traceback (most recent call last):
  File "/home/train/.local/bin/parallel-wavegan-train", line 11, in <module>
    load_entry_point('parallel-wavegan', 'console_scripts', 'parallel-wavegan-train')()
  File "/home/train/ParallelWaveGAN/parallel_wavegan/bin/train.py", line 921, in main
    trainer.run()
  File "/home/train/ParallelWaveGAN/parallel_wavegan/bin/train.py", line 91, in run
    self._train_epoch()
  File "/home/train/ParallelWaveGAN/parallel_wavegan/bin/train.py", line 291, in _train_epoch
    self._train_step(batch)
  File "/home/train/ParallelWaveGAN/parallel_wavegan/bin/train.py", line 175, in _train_step
    sc_loss, mag_loss = self.criterion["stft"](y_.squeeze(1), y.squeeze(1))
  File "/home/train/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/train/ParallelWaveGAN/parallel_wavegan/losses/stft_loss.py", line 147, in forward
    sc_l, mag_l = f(x, y)
  File "/home/train/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/train/ParallelWaveGAN/parallel_wavegan/losses/stft_loss.py", line 101, in forward
    x_mag = stft(x, self.fft_size, self.shift_size, self.win_length, self.window)
  File "/home/train/ParallelWaveGAN/parallel_wavegan/losses/stft_loss.py", line 26, in stft
    x_stft = torch.stft(x, fft_size, hop_size, win_length, window)
  File "/home/train/.local/lib/python3.7/site-packages/torch/functional.py", line 516, in stft
    normalized, onesided, return_complex)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
```

kan-bayashi commented 4 years ago

Maybe you use torch==1.7. Not yet tested. For quick fixing, please use torch<=1.6.
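The traceback points at the `window` tensor: `self.window` is created on the CPU, and starting with torch 1.7 `torch.stft` rejects a CPU window paired with a CUDA input. A minimal sketch of a device-safe `stft` helper in the spirit of `stft_loss.py` (an illustration only, not necessarily the patch applied in #225):

```python
import torch


def stft(x, fft_size, hop_size, win_length, window):
    """Return a magnitude spectrogram of shape (B, frames, fft_size // 2 + 1).

    Moving the window onto x's device avoids the device-mismatch
    RuntimeError raised by torch>=1.7 when `window` lives on the CPU
    while `x` is on the GPU. (Hypothetical wrapper for illustration.)
    """
    x_stft = torch.stft(
        x, fft_size, hop_size, win_length,
        window=window.to(x.device), return_complex=True,
    )
    # Magnitude, clamped away from zero for numerical stability.
    return torch.clamp(x_stft.abs(), min=1e-7).transpose(2, 1)
```

The same effect can be had by registering the window as a module buffer so that `criterion.to(device)` moves it automatically.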

ghost commented 4 years ago

> Maybe you use torch==1.7. Not yet tested. For quick fixing, please use torch<=1.6.

Resolved by using torch==1.6.0. Thank you!
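For anyone hitting the same error, the workaround amounts to pinning the dependency before the fix lands (pip shown here as an assumption; adjust for your environment):

```shell
pip install "torch==1.6.0"
```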

kan-bayashi commented 4 years ago

Fixed in #225