Closed: krishanr closed this issue 1 year ago
Hi Krishan,
What is the shape of the final output layer?
Using concatenate_depth 'layer4', the last conv1x1 layer is `Conv2d(4096, 2048, kernel_size=(1, 1), stride=(1, 1))`. Also, for an input tensor of shape (1, 4, 256, 256) (initialized with torch.randn), the model's output shape is [1, 2, 256, 256].
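That shape can be verified with a quick smoke test. This is only a sketch: the single 1x1 conv below is a stand-in for the real model, used purely to illustrate the band-in/class-out shape check.

```python
import torch
import torch.nn as nn

# Stand-in for the real model: the actual network is far larger; this single
# 1x1 conv only illustrates checking 4 input bands -> 2 output classes.
model = nn.Conv2d(in_channels=4, out_channels=2, kernel_size=1)
model.eval()

x = torch.randn(1, 4, 256, 256)  # (batch, bands, height, width)
with torch.no_grad():
    out = model(x)
print(tuple(out.shape))  # (1, 2, 256, 256)
```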
I remember I had this issue with the pretrained_unet model when I turned off the normalization. One possibility is to reincorporate normalization for the 4-band imagery, since according to the deeplabv3 documentation the images should be normalized.
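A minimal sketch of per-band normalization for 4-band 16-bit imagery. The mean/std values below are placeholders, not real statistics — in practice you would compute them per band over the training set:

```python
import torch

# Hypothetical per-band statistics (placeholders): compute the real mean and
# std of each of the 4 bands over the training set before using this.
mean = torch.tensor([0.485, 0.456, 0.406, 0.500]).view(1, 4, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225, 0.250]).view(1, 4, 1, 1)

def normalize(batch):
    """Scale 16-bit imagery to [0, 1], then standardize each band."""
    batch = batch.float() / 65535.0
    return (batch - mean) / std

x = torch.randint(0, 65536, (1, 4, 256, 256), dtype=torch.int64)
y = normalize(x)
print(tuple(y.shape))  # (1, 4, 256, 256)
```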
For debugging purposes, I suggest using unet_pretrained and skipping the concatenation involved with deeplabv3.
There are ongoing discussions about removing deeplabv3 with NIR injection; see #218.
Hi all,
I have two issues to report when working with 4-band 16-bit imagery at commit 6852729, using a config file similar to https://github.com/NRCan/geo-deep-learning/blob/v.1.2.0/conf/development/config_test_4channels_implementation.yaml.
All the steps up to inference.py can be run without error, but inference.py reports the following error:
The above error occurs because the LayersEnsemble model is using concatenation point 'conv1' even though the model was built with 'layer4', as specified in the yaml configuration file. This can be fixed by modifying line 496 in inference.py like so:
Once the above line is added to inference.py, the inferences are generated appropriately.
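The exact replacement line isn't reproduced in this thread; a hypothetical sketch of the underlying idea — reading the concatenation point from the parsed yaml params instead of a hardcoded 'conv1' default — could look like this (the dict layout and key names are assumptions, not the repo's actual code):

```python
# Assumed shape of the parsed yaml config; key names are illustrative only.
params = {'models': {'concatenate_depth': 'layer4'}}

# Instead of a hardcoded default such as conc_point = 'conv1', read the
# concatenation point the model was actually built with:
conc_point = params['models'].get('concatenate_depth', 'conv1')
print(conc_point)  # layer4
```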
However, one last issue remains because no normalization is applied to the images. The model then starts to learn the ignored class (value -1) in addition to the target class (we're doing binary segmentation here). Any ideas on how to prevent the model from learning the ignored class?
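One standard way to keep pixels labelled -1 out of training is to set `ignore_index` on the loss. A minimal sketch, assuming a cross-entropy loss is used (the shapes below are toy values):

```python
import torch
import torch.nn as nn

# ignore_index=-1 excludes pixels labelled -1 from both the loss value and
# the gradients, so the model gets no signal from the ignored class.
criterion = nn.CrossEntropyLoss(ignore_index=-1)

logits = torch.randn(1, 2, 4, 4, requires_grad=True)  # binary-class logits
target = torch.randint(0, 2, (1, 4, 4))
target[0, 0, :] = -1  # mark the first row of pixels as ignored

loss = criterion(logits, target)
loss.backward()
# Gradients at the ignored pixels are exactly zero:
print(logits.grad[0, :, 0, :].abs().sum().item())  # 0.0
```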