Closed Shiv1143 closed 10 months ago
Thanks for reporting the issue. Can you add a link that points to the exact line?
Sorry, I wasn't aware that someone had replied. Please help me create the pull request for this issue.
I still do not understand the problem exactly. Can you give us more details and point to the line in the code?
In `model.py`, the `forward` function currently reads:

```python
def forward(self, decoder_features, encoder_features):
    out = self.conv3x3(decoder_features)
    out = self.decoder_blocks(out)
    if self.training:
        out_side = self.side_output(out)
    else:
        out_side = None
    out = self.upsample(out)
    if self.encoder_decoder_fusion == 'add':
        out += encoder_features
    return out, out_side
```
Here `out` does not come out with the right shape on Cityscapes, so I made this change, which I think will work across all weights (the added lines are marked):

```python
def forward(self, decoder_features, encoder_features):
    out = self.conv3x3(decoder_features)
    out = self.decoder_blocks(out)
    if self.training:
        out_side = self.side_output(out)
    else:
        out_side = None
    out = self.upsample(out)
    # added: crop the last dimension if the upsampled output is wider
    # than the encoder features
    if out.shape != encoder_features.shape:
        out = out[:, :, :, :encoder_features.shape[3]]
    if self.encoder_decoder_fusion == 'add':
        out += encoder_features
    return out, out_side
```
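To make the workaround concrete, here is a minimal NumPy sketch of the slicing step (the shapes are hypothetical examples of the reported mismatch, not taken from the actual model):

```python
import numpy as np

# Hypothetical NCHW shapes illustrating the reported mismatch: the
# upsampled decoder output is one column wider than the encoder features.
decoder_out = np.zeros((1, 128, 32, 34))       # after self.upsample(out)
encoder_features = np.zeros((1, 128, 32, 33))

# The proposed workaround: crop the last (width) dimension to match.
if decoder_out.shape != encoder_features.shape:
    decoder_out = decoder_out[:, :, :, :encoder_features.shape[3]]

fused = decoder_out + encoder_features          # 'add' fusion now works
print(fused.shape)                              # (1, 128, 32, 33)
```

Note that this only crops the last dimension; a mismatch in the height dimension would still raise an error.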
You can verify this from your side as well. Also, is there a way we can raise a pull request for this issue?
Hmm, actually, this should not happen. Can you share the full Python command you executed, including all arguments, as well as some details about your Python environment (`pip list`, `conda list`)?
I generally use a Conda environment. I ran it quite a while back, so I can't remember the exact command, but the issue was that when running the Cityscapes weights, the third dimension was not aligned with the encoder shape (it was longer than expected), so I modified the code to make it work.
I guess you just picked the wrong context module and/or input resolution; both differ for Cityscapes compared to NYUv2/SUNRGB-D. I will close this issue, as we are not able to reproduce it (anymore).
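A short sketch of why a wrong input resolution can produce exactly this off-by-one: strided downsampling rounds odd sizes, so upsampling by 2 no longer reproduces the encoder's spatial size. The widths below are hypothetical; `down` models a stride-2 layer that ceils odd sizes and `up` a `scale_factor=2` upsample:

```python
import math

def down(w):
    # stride-2 layer with 'same'-style padding: output width is ceil(w / 2)
    return math.ceil(w / 2)

def up(w):
    # nearest/bilinear upsample with scale_factor=2
    return w * 2

# An odd feature-map width does not survive the round trip:
encoder_w = 33
print(encoder_w, up(down(encoder_w)))   # 33 34 -> 'add' fusion fails

# A width that stays divisible by 2 at every stage round-trips exactly:
encoder_w = 32
print(encoder_w, up(down(encoder_w)))   # 32 32
```

With the intended Cityscapes resolution, every stage stays divisible by 2, which is why the maintainers cannot reproduce the mismatch.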
While using the Cityscapes weights, I faced errors regarding the third dimension of the image, related to the encoder features in `model.py`. So I manipulated the third dimension by slicing it down to the appropriate size to make it work with our image dimensions. Can you solve that issue from your end?