What are your input dimensions?
Each image is 512x512 pixels with 3 channels (red, green, and blue), and the annotations are single-channel with three classes, having pixel values 1, 2, and 3.
Could you please print the sizes of `output3`, `self.p1(output3)`, and so on?
Ok. For `print(output3.shape, self.p1(output3).shape, self.p2(output3).shape, self.p3(output3).shape, self.p4(output3).shape)` the output is:
torch.Size([10, 512, 64, 64]) torch.Size([10, 128, 48, 48]) torch.Size([10, 128, 48, 48]) torch.Size([10, 128, 48, 48]) torch.Size([10, 128, 48, 48])
This error occurs irrespective of the image size. I tried the sample images provided in the data directory, but I get the same error there too. For the sample images provided in the data directory, the sizes are:
torch.Size([10, 512, 64, 64]) torch.Size([10, 128, 48, 48]) torch.Size([10, 128, 48, 48]) torch.Size([10, 128, 48, 48]) torch.Size([10, 128, 48, 48])
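For reference, a minimal sketch of the clash these shapes produce, assuming the decoder concatenates `output3` with the PSPDec branch outputs along the channel dimension (an assumption about where the error surfaces, since the traceback is not shown here):

```python
import torch

# Shapes taken from the printout above: output3 is 64x64, but each pooled branch is 48x48.
output3 = torch.randn(10, 512, 64, 64)
p1_out = torch.randn(10, 128, 48, 48)

# Concatenating along the channel dimension fails because the spatial sizes differ.
try:
    torch.cat([output3, p1_out], dim=1)
except RuntimeError as e:
    print(e)  # e.g. "Sizes of tensors must match except in dimension 1 ..."
```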
If you look at line 137, in the PSPDec block, the output size is hard-coded to 48. That is why you are getting 48 as the tensor size. You need to change that setting based on your input size.
```python
class PSPDec(nn.Module):
    '''
    Inspired or Adapted from Pyramid Scene Network paper
    Link: https://arxiv.org/abs/1612.01105
    '''
    def __init__(self, nIn, nOut, downSize, upSize=48):
        super().__init__()
        self.features = nn.Sequential(
            # NOTE: we trained our network at a fixed size. If you want to train
            # the network at a variable size, use the version below.
            nn.AdaptiveAvgPool2d(downSize),
            nn.Conv2d(nIn, nOut, 1, bias=False),
            nn.BatchNorm2d(nOut, momentum=0.95, eps=1e-3),
            nn.ReLU(True),
            nn.Upsample(size=upSize, mode='bilinear')
        )

    def forward(self, x):
        return self.features(x)
```
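For example, a hedged sketch (the channel counts and pooling size below are illustrative, not necessarily the values used in YNet): with a 512x512 input whose `output3` is 64x64, as in your printout, the fixed-size variant needs `upSize=64` so the branch outputs match `output3` spatially.

```python
# Illustrative only: with output3 of shape [N, 512, 64, 64], upSize must be 64
# so that the branch outputs can be concatenated with output3 along dim=1.
p1 = PSPDec(512, 128, downSize=1, upSize=64)
```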
In our experiments, we used a fixed size, as noted in the NOTE in the code itself, along with a solution to this issue. It would be good if you could read and understand the code before opening an issue.
```python
import torch.nn as nn
import torch.nn.functional as F


class PSPDec(nn.Module):
    '''
    Inspired or Adapted from Pyramid Scene Network paper
    '''
    def __init__(self, nIn, nOut, downSize):
        super().__init__()
        self.scale = downSize  # fraction of the input spatial size to pool down to
        self.features = nn.Sequential(
            nn.Conv2d(nIn, nOut, 1, bias=False),
            nn.BatchNorm2d(nOut, momentum=0.95, eps=1e-3),
            nn.ReLU(True)
        )

    def forward(self, x):
        assert x.dim() == 4
        inp_size = x.size()
        # Pool down to a fraction of the input size, then upsample back to the
        # input resolution so the branch output matches the input spatially.
        out_dim1, out_dim2 = int(inp_size[2] * self.scale), int(inp_size[3] * self.scale)
        x_down = F.adaptive_avg_pool2d(x, output_size=(out_dim1, out_dim2))
        return F.upsample(self.features(x_down), size=(inp_size[2], inp_size[3]), mode='bilinear')
```
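Hypothetical usage of this variable-size version (the scale value is illustrative, not necessarily one used in YNet): each branch returns a tensor with the same spatial size as its input, so the branches can be concatenated with `output3` for any image size.

```python
import torch

output3 = torch.randn(10, 512, 64, 64)
p1 = PSPDec(512, 128, downSize=0.75)  # pools to 48x48 internally, then upsamples back
print(p1(output3).shape)  # torch.Size([10, 128, 64, 64])
```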
Thanks a lot. Actually, I have been using Keras and am not familiar with PyTorch, so I had these doubts. Thanks for resolving the issues.
No worries. Some of the issues are obvious, and oftentimes the solution is present/noted in the code. A careful read might help resolve such issues quickly.
Feel free to create new issues if you encounter more errors.
Regarding your dataset labeling, make sure that you are using [0, 1, 2] as class labels instead of [1, 2, 3] for 3-class classification. Like Python, PyTorch is 0-indexed.
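For example, a minimal sketch of remapping the annotation values (the variable names are illustrative):

```python
import numpy as np

# Illustrative: shift pixel values {1, 2, 3} to the 0-indexed labels {0, 1, 2}
# expected by PyTorch losses such as nn.CrossEntropyLoss.
mask = np.array([[1, 2], [3, 1]], dtype=np.uint8)
mask_zero_indexed = mask - 1  # now contains only the values 0, 1, 2
```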
Thanks a lot for the guidance.
I am getting this error regardless of the dimensions of the image I input to the network. I got the following error via:
python U:\YNet-master\stage1\main.py --data_dir U:\YNet-master\stage1\data --max_epochs 10 --batch_size 10 --visualizeNet False