caogang / wgan-gp

A PyTorch implementation of the paper "Improved Training of Wasserstein GANs"
MIT License
1.51k stars 345 forks

Issues about running gan_toy.py #6

Closed yscacaca closed 7 years ago

yscacaca commented 7 years ago

Hi,

I'm trying to run gan_toy.py without any modifications, using the master version of pytorch after commit #1507. However, I get the following error when running the code:

```
Traceback (most recent call last):
  File "gan_toy.py", line 270, in <module>
    gradient_penalty.backward()
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 145, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/__init__.py", line 98, in backward
    variables, grad_variables, retain_graph)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/function.py", line 90, in apply
    return self._forward_cls.backward(self, *args)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/linear.py", line 23, in backward
    grad_input = torch.mm(grad_output, weight)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 531, in mm
    return self._static_blas(Addmm, (output, 0, 1, self, matrix), False)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 524, in _static_blas
    return cls.apply(*(args[:1] + args[-2:] + (alpha, beta, inplace)))
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/_functions/blas.py", line 24, in forward
    matrix1, matrix2, out=output)
TypeError: torch.addmm received an invalid combination of arguments - got (int, torch.cuda.ByteTensor, int, torch.cuda.ByteTensor, torch.cuda.FloatTensor, out=torch.cuda.ByteTensor), but expected one of:
```

I'm wondering whether you have any ideas about the causes of this problem.

Thanks.

caogang commented 7 years ago

Sorry, this is a bug in pytorch itself. I can give you a fix: change the source code as shown below and recompile, and the error will go away.

torch/nn/_functions/thnn/activation.py

```diff
         else:
-            grad_input = grad_output.masked_fill(input <= ctx.threshold, 0)
+            mask = input > ctx.threshold
+            grad_input = mask.type_as(grad_output) * grad_output
         return grad_input, None, None, None
```
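
The idea behind the patch, shown in isolation (this is a hedged illustration with made-up tensors, not the pytorch source): the comparison produces a byte/bool mask, which the old `masked_fill`-based backward could leak into the graph as a `ByteTensor` gradient; casting the mask to `grad_output`'s dtype before multiplying keeps everything a `FloatTensor`.

```python
import torch

x = torch.tensor([-1.0, 0.5, 2.0])  # stand-in for `input`
g = torch.ones(3)                   # stand-in for `grad_output`

mask = x > 0.0                      # byte/bool mask, like `input > ctx.threshold`
grad = mask.type_as(g) * g          # cast to g's dtype first, then multiply
```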
yscacaca commented 7 years ago

It seems to be working now. Thanks for your help!