Closed by YanWang2014 6 years ago
Update: it runs normally when using a single GPU.
Thank you very much for reporting this bug.
I have fixed the multi-GPU support. It was a small conflict between DataParallel and the h and s parameters, which were stored as Variable buffers.
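In broad strokes, the fix amounts to registering h and s as buffers so that DataParallel replicates them onto every device. A minimal sketch of the idea (the names, shapes and constructor arguments here are illustrative, not the exact code in the repository):

```python
import torch
import torch.nn as nn

class CompactBilinearPooling(nn.Module):
    """Minimal sketch: the count-sketch indices h and signs s are registered
    as buffers, so nn.DataParallel copies them to each replica/GPU."""
    def __init__(self, input_dim, output_dim):
        super().__init__()
        # h: random output bucket for each input channel
        self.register_buffer("h", torch.randint(output_dim, (input_dim,)))
        # s: random +/-1 sign for each input channel
        self.register_buffer("s", 2.0 * torch.randint(2, (input_dim,)).float() - 1.0)
```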
I have changed the prototype of forward to make y an optional argument.
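In other words, something along these lines (a sketch only; the element-wise product stands in for the actual count-sketch/FFT computation):

```python
import torch
import torch.nn as nn

class CompactBilinearPoolingSketch(nn.Module):
    """Illustrates the new forward prototype with an optional y."""
    def forward(self, x, y=None):
        # When y is omitted, the layer pools x with itself
        # (the usual single-input case).
        if y is None:
            y = x
        # Placeholder for the real count-sketch + FFT computation.
        return x * y
```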
I can also add a dim option to the constructor; however, we can't get rid of the permutation because the FFT requires the channels to be the last dimension of a contiguous tensor.
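To illustrate what the dim option and the permutation look like (a sketch; channels_last_fft is a hypothetical helper, and torch.fft.rfft stands in for the FFT used inside the layer):

```python
import torch

def channels_last_fft(x, dim=1):
    # Move the chosen channel dimension to the end and make the tensor
    # contiguous, because the FFT runs over the last dimension.
    perm = [d for d in range(x.dim()) if d != dim] + [dim]
    x = x.permute(*perm).contiguous()
    return torch.fft.rfft(x, dim=-1)

# e.g. a conv feature map of shape (N, C, H, W) with channels at dim=1
spec = channels_last_fft(torch.randn(2, 8, 4, 4), dim=1)
```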
I am not too keen on adding the activation and normalization directly inside the cbp layer because, contrary to TensorFlow, none of the layers in torch.nn work that way. I think it is cleaner to put them in a separate module and plug everything together using nn.Sequential.
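For example (a sketch; SignedSqrt and L2Norm are hypothetical helper modules, and the CompactBilinearPooling constructor arguments are assumed):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SignedSqrt(nn.Module):
    # Element-wise signed square root, often applied after bilinear pooling.
    def forward(self, x):
        return torch.sign(x) * torch.sqrt(torch.abs(x) + 1e-8)

class L2Norm(nn.Module):
    # L2-normalise the pooled feature vector.
    def forward(self, x):
        return F.normalize(x, p=2, dim=-1)

# Chain the pooling layer with its post-processing.
post = nn.Sequential(SignedSqrt(), L2Norm())
# model = nn.Sequential(CompactBilinearPooling(512, 8192), SignedSqrt(), L2Norm())
# (constructor arguments above are assumed for illustration)
```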
Great! Thank you!
I modified the layer's forward so that it can be called with a single input (pooling x with itself) instead of requiring both x and y. This makes it easier to plug the compact pooling layer into PyTorch CNNs.
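Roughly like this (a simplified sketch, not my exact code; the CompactBilinearPooling import path and constructor arguments are assumed):

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Hypothetical usage sketch: a ResNet-18 feature extractor followed by the
# compact bilinear pooling layer applied to its (N, C, H, W) feature map.
# from compact_bilinear_pooling import CompactBilinearPooling  # assumed import path

backbone = nn.Sequential(*list(models.resnet18().children())[:-2])
features = backbone(torch.randn(2, 3, 224, 224))  # (2, 512, 7, 7)
# cbp = CompactBilinearPooling(512, 8192)   # constructor arguments assumed
# pooled = cbp(features)                    # exact call depends on the expected input layout
```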
However, when I run this on multiple GPUs, I get the following error:
Do you have any ideas?