Generator created
DataLoader created
Test samples loaded
Starting Training...
Traceback (most recent call last):
  File "model.py", line 396, in <module>
    outputs = discriminator(batch_pairs_var, ref_batch_var)  # output : [40 x 1 x 8]
  File "/home/xxxxx/seganPyTorch-env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/xxxxx/seganPyTorch-env/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 66, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/xxxxx/seganPyTorch-env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "model.py", line 109, in forward
    ref_x = self.conv1(ref_x)
  File "/home/xxxxx/seganPyTorch-env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/xxxxx/seganPyTorch-env/lib/python3.5/site-packages/torch/nn/modules/conv.py", line 166, in forward
    self.padding, self.dilation, self.groups)
  File "/home/xxxxx/seganPyTorch-env/lib/python3.5/site-packages/torch/nn/functional.py", line 54, in conv1d
    return f(input, weight, bias)
RuntimeError: tensors are on different GPUs
I'm getting this error as soon as training starts, in the discriminator's forward pass. Any idea how this can be solved?
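For context, here is a minimal sketch of how I understand the setup should look: the model's parameters and every input tensor (including the reference batch) moved to the same device before the DataParallel-wrapped forward call. The Discriminator stub and tensor sizes below are illustrative assumptions, not the actual SEGAN code:

```python
import torch
import torch.nn as nn

# Toy discriminator with a single 1-D conv, mirroring the self.conv1(ref_x)
# call in the traceback. Channel and kernel sizes are made up for illustration.
class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv1d(2, 4, kernel_size=3, padding=1)

    def forward(self, x, ref_x):
        # Both inputs must live on the same device as conv1's weights,
        # otherwise conv1d raises "tensors are on different GPUs".
        return self.conv1(x) + self.conv1(ref_x)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

discriminator = Discriminator().to(device)          # move parameters first
if torch.cuda.device_count() > 1:
    discriminator = nn.DataParallel(discriminator)  # then wrap for multi-GPU

# Move every input explicitly; leaving one (e.g. the reference batch)
# on a different device is the usual cause of this RuntimeError.
batch_pairs_var = torch.randn(8, 2, 16).to(device)
ref_batch_var = torch.randn(8, 2, 16).to(device)

outputs = discriminator(batch_pairs_var, ref_batch_var)
print(outputs.shape)  # torch.Size([8, 4, 16])
```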