sacmehta / YNet

Y-Net: Joint Segmentation and Classification for Diagnosis of Breast Biopsy Images
https://sacmehta.github.io/YNet/
MIT License

Size of tensors must match except in dimension 2. #11

Closed: shubham-scisar closed this issue 3 years ago

shubham-scisar commented 3 years ago

I am getting this error regardless of the dimensions of the image I input to the network. I got the following error by running:

python U:\YNet-master\stage1\main.py --data_dir U:\YNet-master\stage1\data --max_epochs 10 --batch_size 10 --visualizeNet False

Traceback (most recent call last):
  File "U:\YNet-master\stage1\main.py", line 317, in <module>
    trainValidateSegmentation(args)
  File "U:\YNet-master\stage1\main.py", line 246, in trainValidateSegmentation
    train(args, trainLoaderWithZoom, model, criteria, optimizer, epoch)
  File "U:\YNet-master\stage1\main.py", line 86, in train
    output = model(input_var)
  File "C:\Users\SHUBHAM\Anaconda3\envs\sacmynet\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "U:\YNet-master\stage1\Model.py", line 268, in forward
    torch.cat([output3, self.p1(output3), self.p2(output3), self.p3(output3), self.p4(output3)], 1))
RuntimeError: Sizes of tensors must match except in dimension 2. Got 48 and 64 (The offending index is 0)
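
For reference, this failure mode comes from torch.cat: every dimension except the concatenation dimension must match across the tensors. A minimal, hypothetical repro (shapes illustrative):

import torch

# Hypothetical repro: torch.cat along dim 1 requires all other dimensions to
# match, so the 64 vs 48 spatial mismatch raises a RuntimeError like the one above.
a = torch.randn(10, 512, 64, 64)
b = torch.randn(10, 128, 48, 48)
torch.cat([a, b], 1)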
sacmehta commented 3 years ago

What are your input dimensions?

shubham-scisar commented 3 years ago

Each image is 512x512 pixels with 3 channels (red, green, and blue), and the annotations are single-channel with three classes whose pixel values are 1, 2, and 3.

sacmehta commented 3 years ago

Could you please print the sizes of output3, self.p1(output3), and so on?

shubham-scisar commented 3 years ago

OK. For print(output3.shape, self.p1(output3).shape, self.p2(output3).shape, self.p3(output3).shape, self.p4(output3).shape), I get:

torch.Size([10, 512, 64, 64]) torch.Size([10, 128, 48, 48]) torch.Size([10, 128, 48, 48]) torch.Size([10, 128, 48, 48]) torch.Size([10, 128, 48, 48])

shubham-scisar commented 3 years ago

This error occurs irrespective of the image size. I also tried the sample images provided in the data directory, and the same error occurs there. For those sample images, the sizes are:

torch.Size([10, 512, 64, 64]) torch.Size([10, 128, 48, 48]) torch.Size([10, 128, 48, 48]) torch.Size([10, 128, 48, 48]) torch.Size([10, 128, 48, 48])

sacmehta commented 3 years ago

If you look at line 137, the output size of the PSPDec block is hard-coded to 48. That is why you are getting 48 as the tensor size. You need to change this setting based on your input size.

import torch.nn as nn

class PSPDec(nn.Module):
    '''
    Inspired by / adapted from the Pyramid Scene Parsing Network paper
    Link: https://arxiv.org/abs/1612.01105
    '''
    # NOTE: we trained our network at a fixed size. If you want to train the
    # network at a variable size, use the version below.
    def __init__(self, nIn, nOut, downSize, upSize=48):
        super().__init__()
        self.features = nn.Sequential(
            nn.AdaptiveAvgPool2d(downSize),             # pool to a fixed spatial size
            nn.Conv2d(nIn, nOut, 1, bias=False),        # 1x1 channel projection
            nn.BatchNorm2d(nOut, momentum=0.95, eps=1e-3),
            nn.ReLU(True),
            nn.Upsample(size=upSize, mode='bilinear')   # upsample to the hard-coded size
        )

    def forward(self, x):
        return self.features(x)
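
For 512x512 inputs, output3 is 64x64 (per the sizes printed above), so a hedged sketch of the fix is to construct the pyramid branches with upSize=64. The downSize value below is illustrative only, not necessarily what Model.py uses:

import torch

# Hedged sketch: with 512x512 inputs, output3 is 64x64, so upSize must be 64
# for the torch.cat in Model.py to succeed. downSize=8 is illustrative only.
branch = PSPDec(512, 128, downSize=8, upSize=64)

output3 = torch.randn(10, 512, 64, 64)
print(branch(output3).shape)  # torch.Size([10, 128, 64, 64]), matches output3 spatially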
sacmehta commented 3 years ago

In our experiments, we used a fixed size, as noted in the NOTE in the code itself, along with a solution to this issue. It would be good if you could read and understand the code before opening an issue.

sacmehta commented 3 years ago
import torch.nn as nn
import torch.nn.functional as F

class PSPDec(nn.Module):
    '''
    Inspired by / adapted from the Pyramid Scene Parsing Network paper
    '''

    def __init__(self, nIn, nOut, downSize):
        super().__init__()
        self.scale = downSize  # fractional scale instead of a hard-coded output size
        self.features = nn.Sequential(
            nn.Conv2d(nIn, nOut, 1, bias=False),
            nn.BatchNorm2d(nOut, momentum=0.95, eps=1e-3),
            nn.ReLU(True)
        )

    def forward(self, x):
        assert x.dim() == 4
        inp_size = x.size()
        # downsample relative to the input size, project channels, then upsample back
        out_dim1, out_dim2 = int(inp_size[2] * self.scale), int(inp_size[3] * self.scale)
        x_down = F.adaptive_avg_pool2d(x, output_size=(out_dim1, out_dim2))
        return F.upsample(self.features(x_down), size=(inp_size[2], inp_size[3]), mode='bilinear')
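
A brief usage sketch of this variable-size version (the scale value is illustrative): the branch output now always matches the input's spatial size, so the torch.cat succeeds at any input resolution.

import torch

# Hedged usage sketch of the variable-size PSPDec above; downSize=0.5 is illustrative.
branch = PSPDec(512, 128, downSize=0.5)

output3 = torch.randn(10, 512, 64, 64)
print(branch(output3).shape)  # torch.Size([10, 128, 64, 64]) for any input spatial size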
shubham-scisar commented 3 years ago

Thanks a lot. Actually, I have been using Keras and am not familiar with PyTorch, hence these doubts. Thanks for resolving the issue.

sacmehta commented 3 years ago

No worries. Some of the issues are obvious, and oftentimes the solution is present/noted in the code. A careful read might help resolve such issues quickly.

Feel free to create new issues if you encounter more errors.

Regarding your dataset labeling, make sure that you are using [0, 1, 2] as class labels instead of [1, 2, 3] for 3-class classification. Like Python, PyTorch is 0-indexed.
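
A minimal sketch of that remapping, assuming the annotation masks are loaded as integer tensors:

import torch

# Minimal sketch: shift ground-truth labels {1, 2, 3} -> {0, 1, 2}, since
# losses such as nn.CrossEntropyLoss expect class indices starting at 0.
mask = torch.tensor([[1, 2], [3, 1]])
mask = mask - 1  # values now in {0, 1, 2}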

shubham-scisar commented 3 years ago

Thanks a lot for the guidance.