YingdaXia / SynthCP

Official code base for the ECCV oral paper "Synthesize then Compare: Detecting Failures and Anomalies for Semantic Segmentation"

Issues with use of HRNet for segmenting novel objects #12

Open ShravanthiPatil opened 3 years ago

ShravanthiPatil commented 3 years ago

Hello, instead of PSPNet I am trying to use HRNet to detect novel objects in the input images. I was able to train HRNet on the StreetHazards dataset and generate the semantic maps with it, and I have then obtained the reconstructed images from the predicted semantic maps. However, I am running into issues when segmenting novel objects on the StreetHazards test set with HRNet. Could you please help me resolve this? I get the following error when I run eval_ood_rec.py for HRNet:

File "eval_ood_rec.py", line 315, in main(cfg, args.gpu) File "eval_ood_rec.py", line 250, in main evaluate(segmentation_module, loader_val, loader_rec, cfg, gpu) File "eval_ood_rec.py", line 110, in evaluate scores_tmp, ft_temp = segmentation_module(feed_dict, segSize=segSize) File "/home/spati12s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call result = self.forward(*input, *kwargs) File "/scratch/spati12s/SynthCP/anomaly/models/models.py", line 78, in forward pred, ft = self.decoder(self.encoder(feed_dict['img_data'], return_feature_maps=True), segSize=segSize, output_ft=True) File "/home/spati12s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call result = self.forward(input, **kwargs) TypeError: forward() got an unexpected keyword argument 'output_ft'

Does the file models/hrnet.py need any specific changes? Kindly suggest!

YingdaXia commented 3 years ago

Hello,

Thanks for your interest.

We modified PSPNet so that the network outputs its feature map (via the output_ft argument) for anomaly segmentation. If you want to try HRNet, you can modify it accordingly so that it outputs the feature map used to compute the cosine distance for segmenting anomalous objects.
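For illustration, here is a minimal sketch of the kind of decoder change meant here, assuming a decoder that consumes the last encoder feature map and has a pre-classifier stage; the names DecoderWithFeature, cbr and conv_last are placeholders for this sketch, not the exact code in the repository:

```python
import torch.nn as nn
import torch.nn.functional as F

class DecoderWithFeature(nn.Module):
    """Illustrative decoder that can optionally return its pre-classifier feature map."""

    def __init__(self, fc_dim=2048, num_class=14):
        super().__init__()
        self.cbr = nn.Sequential(
            nn.Conv2d(fc_dim, fc_dim // 4, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(fc_dim // 4),
            nn.ReLU(inplace=True),
        )
        self.conv_last = nn.Conv2d(fc_dim // 4, num_class, kernel_size=1)

    def forward(self, conv_out, segSize=None, output_ft=False):
        # conv_out is the list of encoder feature maps (return_feature_maps=True);
        # the decoder consumes the last (deepest) one.
        conv5 = conv_out[-1]
        x = self.cbr(conv5)
        ft = x.clone()  # feature map later used for the cosine-distance anomaly score
        x = self.conv_last(x)
        if segSize is not None:
            x = F.interpolate(x, size=segSize, mode='bilinear', align_corners=False)
            x = F.softmax(x, dim=1)
        if output_ft:
            return x, ft  # (prediction, feature) only when the caller asks for it
        return x
```

The point of the optional return is that existing callers which do not pass output_ft keep working unchanged.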

ShravanthiPatil commented 3 years ago

Hello YingdaXia, thanks a lot for the response and for the very interesting work! I did modify the file (https://github.com/YingdaXia/SynthCP/blob/master/anomaly/models/hrnet.py), specifically the function at line 392, def forward(self, x, return_feature_maps=False):

I set ft = x and returned both [x] and [ft], but I still run into issues.

Could you please be specific about which exact modifications are required, and in which file? This would help me a lot!

Below is the error:

  main(cfg, args.gpu)
File "test_1.py", line 173, in main
  evaluate(segmentation_module, loader_val, cfg, gpu)
File "test_1.py", line 82, in evaluate
  scores_tmp = segmentation_module(feed_dict, segSize=segSize)
File "/home/ajude2s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
  result = self.forward(*input, **kwargs)
File "/scratch/ajude2s/SynthCP/anomaly/models/models.py", line 47, in forward
  pred = self.decoder(self.encoder(feed_dict['img_data'], return_feature_maps=True), segSize=segSize)
File "/home/ajude2s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
  result = self.forward(*input, **kwargs)
File "/scratch/ajude2s/SynthCP/anomaly/models/models.py", line 410, in forward
  x = self.cbr(conv5)
File "/home/ajude2s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
  result = self.forward(*input, **kwargs)
File "/home/ajude2s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
  input = module(input)
File "/home/ajude2s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
  result = self.forward(*input, **kwargs)
File "/home/ajude2s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 343, in forward
  return self.conv2d_forward(input, self.weight)
File "/home/ajude2s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 340, in conv2d_forward
  self.padding, self.dilation, self.groups)
TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not list

Kindly support me in identifying the issue!

Regards, Shravanthi

YingdaXia commented 3 years ago

I modified L470 in models.py, which is the decoder part of PSPNet. This segmentation code base sets up an encoder and a decoder, so I guess you specified HRNet as the encoder in config.yaml. You still need to specify a decoder and modify the forward function in that part accordingly.
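As a hedged sketch of how the calling side in models.py could look under that setup, assuming the encoder is left unchanged and only the decoder accepts the output_ft keyword (the attribute and variable names follow the snippets quoted in this thread, not necessarily the exact file), the relevant lines of the segmentation module's forward method would be:

```python
def forward(self, feed_dict, segSize=None):
    # Encoder stays as-is: it returns a plain list of feature map tensors.
    conv_out = self.encoder(feed_dict['img_data'], return_feature_maps=True)
    # Only the decoder is extended: with output_ft=True it returns (prediction, feature).
    pred, ft = self.decoder(conv_out, segSize=segSize, output_ft=True)
    return pred, ft
```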

ShravanthiPatil commented 3 years ago

Hi YingdaXia, thank you for pointing that out. In the config.yaml for HRNet, the encoder is set to hrnetv2 and the decoder to c1. So, in models.py, I modified the function at L395 and set ft = x (after L407). I am not sure whether this is right. I see you modified L470 with ft = ppm_out.clone().

With this modification in models.py, I still get the same error. Am I missing something else? Is the modification appropriate? Kindly suggest.

Error:

File "eval_ood_rec.py", line 315, in <module>
  main(cfg, args.gpu)
File "eval_ood_rec.py", line 250, in main
  evaluate(segmentation_module, loader_val, loader_rec, cfg, gpu)
File "eval_ood_rec.py", line 110, in evaluate
  scores_tmp, ft_temp = segmentation_module(feed_dict, segSize=segSize)
File "/home/spati12s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
  result = self.forward(*input, **kwargs)
File "/scratch/spati12s/SynthCP/anomaly/models/models.py", line 78, in forward
  pred, ft = self.decoder(self.encoder(feed_dict['img_data'], return_feature_maps=True), segSize=segSize, output_ft=False)
File "/home/spati12s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
  result = self.forward(*input, **kwargs)
File "/scratch/spati12s/SynthCP/anomaly/models/models.py", line 407, in forward
  x = self.cbr(conv5)
File "/home/spati12s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
  result = self.forward(*input, **kwargs)
File "/home/spati12s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
  input = module(input)
File "/home/spati12s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
  result = self.forward(*input, **kwargs)
File "/home/spati12s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 343, in forward
  return self.conv2d_forward(input, self.weight)
File "/home/spati12s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 340, in conv2d_forward
  self.padding, self.dilation, self.groups)
TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not list

Regards, Shravanthi

YingdaXia commented 3 years ago

It could be a problem with the interface between your specified encoder and decoder. After enabling the C1 decoder to output the feature map, try:

pred, ft = self.decoder(self.encoder(feed_dict['img_data']), segSize=segSize, output_ft=True)

in models.py
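One way to narrow this down, as a debugging sketch rather than an official fix, is to check what the encoder actually hands to the decoder, since the conv2d() error above means the decoder's first convolution received a list rather than a tensor:

```python
# Debugging aid (not part of the code base): drop these lines in just before the
# decoder call in models.py to inspect the encoder output.
conv_out = self.encoder(feed_dict['img_data'], return_feature_maps=True)
print(type(conv_out), [type(t) for t in conv_out])
# If any element of conv_out is itself a list (for example, if the HRNet forward was
# changed to return ([x], [ft]) instead of a single list of tensors), then the decoder's
# self.cbr(conv_out[-1]) will fail with exactly this conv2d() TypeError. Keeping the
# encoder's return value unchanged and extracting the feature map inside the decoder
# avoids this.
```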

ShravanthiPatil commented 3 years ago

Hi YingdaXia, thank you for the quick response! I replaced L78, pred, ft = self.decoder(self.encoder(feed_dict['img_data'], return_feature_maps=True), segSize=segSize, output_ft=True),

with

pred, ft = self.decoder(self.encoder(feed_dict['img_data']), segSize=segSize, output_ft=True)

I still have the same issue!

  main(cfg, args.gpu)
File "eval_ood_rec.py", line 250, in main
  evaluate(segmentation_module, loader_val, loader_rec, cfg, gpu)
File "eval_ood_rec.py", line 110, in evaluate
  scores_tmp, ft_temp = segmentation_module(feed_dict, segSize=segSize)
File "/home/spati12s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
  result = self.forward(*input, **kwargs)
File "/scratch/spati12s/SynthCP/anomaly/models/models.py", line 78, in forward
  pred, ft = self.decoder(self.encoder(feed_dict['img_data']), segSize=segSize, output_ft=True)
File "/home/spati12s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
  result = self.forward(*input, **kwargs)
File "/scratch/spati12s/SynthCP/anomaly/models/models.py", line 410, in forward
  x = self.cbr(conv5)
File "/home/spati12s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
  result = self.forward(*input, **kwargs)
File "/home/spati12s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
  input = module(input)
File "/home/spati12s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
  result = self.forward(*input, **kwargs)
File "/home/spati12s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 343, in forward
  return self.conv2d_forward(input, self.weight)
File "/home/spati12s/anaconda3/envs/syth/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 340, in conv2d_forward
  self.padding, self.dilation, self.groups)
TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not list

YingdaXia commented 3 years ago

Hi Shravanthi,

Thank you for your continued interest. Since your configuration is not officially supported, you may have reached the point where stepping into the function and debugging is necessary. I believe a few lines of modification should work in your case. Good luck with it!

Thanks, Yingda

ShravanthiPatil commented 3 years ago

Hi YingdaXia, Thanks a lot for the support! I will check further. :) Regards, Shravanthi