wavefrontshaping / complexPyTorch

A high-level toolbox for using complex valued neural networks in PyTorch
MIT License

Error with forward propagation with ComplexBatchNorm1d #17

Closed zrion closed 2 years ago

zrion commented 3 years ago

Hi,

Thanks for the great work. I'm having a problem with the ComplexBatchNorm1d layer during forward propagation. After passing a tensor of size [-1, 128] to this layer, it returned an error:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-54-45169697ed1b> in <module>
     34 
     35             # Forward + backward + optimizer
---> 36             output = model(input_data) # fw
     37             loss = mse(output, reference_data)
     38 

~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

<ipython-input-53-40fbbf908ada> in forward(self, x)
     37         x = complex_relu(x)
     38         print(x.size())
---> 39         x = self.bn1d(x)
     40 
     41         x = self.out(x)

~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

~/.local/lib/python3.8/site-packages/complexPyTorch/complexLayers.py in forward(self, input)
    313 
    314         if self.training and self.track_running_stats:
--> 315                 self.running_covar[:,0] = exponential_average_factor * Crr * n / (n - 1)\
    316                     + (1 - exponential_average_factor) * self.running_covar[:,0]
    317 

RuntimeError: expand(torch.cuda.FloatTensor{[128, 64, 128]}, size=[128]): the number of sizes provided (1) must be greater or equal to the number of dimensions in the tensor (3)

I'm not sure how to interpret this error, and what would cause it to happen. Can you help? Thanks!
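For context, the RuntimeError above is PyTorch refusing to broadcast a 3-D tensor into a 1-D buffer: `running_covar[:, 0]` is a slice of size `[num_features]`, while the batch statistic on the right-hand side still has the full activation shape. A minimal sketch reproducing the same class of error, with stand-in tensors whose shapes are taken from the traceback:

```python
import torch

# running_covar[:, 0] is a 1-D slot of size [num_features] (here 128),
# but the right-hand side is a 3-D tensor, so PyTorch cannot expand
# [128, 64, 128] into [128] and raises the same "expand" RuntimeError.
running_covar_col = torch.zeros(128)    # stand-in for running_covar[:, 0]
batch_stat = torch.randn(128, 64, 128)  # stand-in for Crr * n / (n - 1)

try:
    running_covar_col[:] = batch_stat
except RuntimeError as e:
    print("broadcast failed:", type(e).__name__)
```

This is consistent with the batch statistics being computed over an input that has more dimensions than a 1d batch norm expects.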

wavefrontshaping commented 3 years ago

Hi,

I added a ComplexBatchNorm1d example to Example.ipynb, and it seems to work fine. I corrected a bug two weeks ago; are you sure you have the latest version?

wavefrontshaping commented 3 years ago

Also, if you want better help, you should provide a minimal version of the code that causes the error. If you do have the latest version, this error probably means that the shape of your tensor is not right for a 1d batch norm, but again, it's difficult to say more without the code.
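For reference, PyTorch's real-valued nn.BatchNorm1d, which ComplexBatchNorm1d mirrors, only accepts input of shape (N, C) or (N, C, L); anything with more dimensions belongs to a 2d batch norm. A quick sketch of the accepted shapes:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(512)  # num_features = 512

ok = torch.randn(8, 512)  # (N, C): valid for a 1d batch norm
print(bn(ok).shape)       # torch.Size([8, 512])

bad = torch.randn(8, 512, 4, 4)  # 4-D input belongs to BatchNorm2d
try:
    bn(bad)
except (ValueError, RuntimeError):
    print("4-D input rejected by BatchNorm1d")
```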

zrion commented 3 years ago

Hi,

Thanks for the response. I just installed the library two days ago via pip. Does that reflect the latest version?

The CNN architecture I'm using:

import torch
import torch.nn as nn
from complexPyTorch.complexLayers import ComplexConv2d, ComplexBatchNorm2d, ComplexBatchNorm1d, ComplexLinear
from complexPyTorch.complexFunctions import complex_relu

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = ComplexConv2d(2, 256, (1, 1))
        self.bn = ComplexBatchNorm2d(256)
        self.ln = ComplexLinear(256 * 128, 512)
        self.bn1d = ComplexBatchNorm1d(512)
        self.out = ComplexLinear(512, 1)

    def forward(self, x):
        x = self.conv1(x)
        x = complex_relu(x)
        x = self.bn(x)

        x = x.view(-1, 256 * 128)
        x = self.ln(x)
        x = complex_relu(x)
        x = self.bn1d(x)

        x = self.out(x)
        x = x.abs()               # magnitude: complex -> real
        x = torch.tanh(x) * 1.02  # F.tanh is deprecated in favor of torch.tanh

        return x

The input tensor is of size (-1, 2, 1, 128). This worked fine with real-valued data (using nn.BatchNorm1d()).
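Tracing the shapes above through a real-valued analogue of the model (nn.Conv2d, nn.Linear, and nn.BatchNorm1d as stand-ins for the complex layers) suggests the flatten and the 1d batch norm input are consistent; a sketch:

```python
import torch
import torch.nn as nn

x = torch.randn(4, 2, 1, 128)     # (N, 2, 1, 128), as described above
x = nn.Conv2d(2, 256, (1, 1))(x)  # 1x1 conv -> (N, 256, 1, 128)
assert x.shape == (4, 256, 1, 128)

x = x.view(-1, 256 * 128)         # flatten: 256 * 1 * 128 == 256 * 128
x = nn.Linear(256 * 128, 512)(x)  # -> (N, 512)
assert x.shape == (4, 512)

x = nn.BatchNorm1d(512)(x)        # 2-D input: valid for a 1d batch norm
print(x.shape)                    # torch.Size([4, 512])
```

If the same shapes fail only with the complex layers, that points to the library version rather than the architecture.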

wavefrontshaping commented 3 years ago

I had not updated the release on PyPI before, but I just did (version 0.4). Update the package with pip and tell me if it works.