MECLabTUDA / M3d-Cam

MIT License
306 stars 40 forks source link

The problem when i run gradcam.py in segmentation task #3

Closed Yufengevan closed 3 years ago

Yufengevan commented 3 years ago

First of all, thanks for your code. I followed your UNet segmentation example; my setup is:

```python
model = Network(channel=32, n_class=1)
model = torch.nn.DataParallel(model).cuda()
model.load_state_dict(torch.load(opt.pth_path))

current_path = '/home/'
# model = medcam.inject(model, label=2, replace=True, backend="gcam", layer='module.attn_conv4')
model = medcam.inject(model, output_dir=os.path.join(current_path, 'results/unet_seg/gcam'),
                      backend='gcam', layer='module.attn_conv4',
                      evaluate=True, save_scores=False, save_maps=True, save_pickle=False, metric='wioa')
```

but when I run test.py, this error occurs:

```
  File "/home/cnu_nyf/anaconda3/envs/inf/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/cnu_nyf/anaconda3/envs/inf/lib/python3.6/site-packages/medcam-0.1.6-py3.6.egg/medcam/medcam_inject.py", line 199, in forward
    self.test_run(batch, internal=True)
  File "/home/cnu_nyf/anaconda3/envs/inf/lib/python3.6/site-packages/medcam-0.1.6-py3.6.egg/medcam/medcam_inject.py", line 229, in test_run
    self.medcam_dict['model_backend'].generate_attention_map(batch, None)
  File "/home/cnu_nyf/anaconda3/envs/inf/lib/python3.6/site-packages/medcam-0.1.6-py3.6.egg/medcam/backends/base.py", line 21, in generate_attention_map
    output = self.forward(batch)
  File "/home/cnu_nyf/anaconda3/envs/inf/lib/python3.6/site-packages/medcam-0.1.6-py3.6.egg/medcam/backends/grad_cam.py", line 109, in forward
    return super(GradCAM, self).forward(data)
  File "/home/cnu_nyf/anaconda3/envs/inf/lib/python3.6/site-packages/medcam-0.1.6-py3.6.egg/medcam/backends/base.py", line 32, in forward
    self._extract_metadata(batch, self.logits)
  File "/home/cnu_nyf/anaconda3/envs/inf/lib/python3.6/site-packages/medcam-0.1.6-py3.6.egg/medcam/backends/base.py", line 75, in _extract_metadata
    self.output_batch_size = output.shape[0]
AttributeError: 'tuple' object has no attribute 'shape'
```

In `self.output_batch_size = output.shape[0]`, the output comes from `self.logits = self.model.model_forward(batch)`. I have no idea what kind of value `self.logits` is supposed to be.
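Since `_extract_metadata` reads `output.shape[0]`, the model's forward pass apparently has to return a single tensor rather than a tuple. One possible workaround is to wrap the model so only its primary output is exposed; the following is just a sketch of that idea (`SingleOutputWrapper` and `DummyNet` are illustrative names, not part of medcam):

```python
import torch
import torch.nn as nn

class SingleOutputWrapper(nn.Module):
    """Expose only the first output of a model that returns a tuple,
    so code expecting a single tensor (with .shape) keeps working."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        out = self.model(x)
        # If the wrapped model returns several outputs, keep only the first.
        if isinstance(out, (tuple, list)):
            return out[0]
        return out

class DummyNet(nn.Module):
    """Stand-in for a segmentation model with two outputs."""
    def forward(self, x):
        return x * 2, x * 3

model = SingleOutputWrapper(DummyNet())
logits = model(torch.ones(1, 3, 4, 4))
print(logits.shape)  # torch.Size([1, 3, 4, 4])
```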

Karol-G commented 3 years ago

Hi,

can you set evaluate=False, remove metric='wioa', and tell me what happens? Also, which network are you using and what is the shape of your input data? Please also show me the code that includes the forward pass.

Best Karol

Yufengevan commented 3 years ago

Thank you for your reply! The problem I mentioned before has been solved; it seems to have been caused by the model having more than one output, and after changing the model to return a single output the problem disappeared. However, a new problem has appeared:

```
Traceback (most recent call last):
  File "/home/cnu_nyf/MyTest_gradCam.py", line 96, in <module>
    inference()
  File "/home/cnu_nyf/MyTest_gradCam.py", line 80, in inference
    result = model(image)
  File "/home/cnu_nyf/anaconda3/envs/inf/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/cnu_nyf/anaconda3/envs/inf/lib/python3.6/site-packages/medcam-0.1.6-py3.6.egg/medcam/medcam_inject.py", line 199, in forward
    self.test_run(batch, internal=True)
  File "/home/cnu_nyf/anaconda3/envs/inf/lib/python3.6/site-packages/medcam-0.1.6-py3.6.egg/medcam/medcam_inject.py", line 229, in test_run
    self.medcam_dict['model_backend'].generate_attention_map(batch, None)
  File "/home/cnu_nyf/anaconda3/envs/inf/lib/python3.6/site-packages/medcam-0.1.6-py3.6.egg/medcam/backends/base.py", line 23, in generate_attention_map
    attention_map = self.generate()
  File "/home/cnu_nyf/anaconda3/envs/inf/lib/python3.6/site-packages/medcam-0.1.6-py3.6.egg/medcam/backends/grad_cam.py", line 126, in generate
    attention_map = self._generate_helper(fmaps, grads, layer)
  File "/home/cnu_nyf/anaconda3/envs/inf/lib/python3.6/site-packages/medcam-0.1.6-py3.6.egg/medcam/backends/grad_cam.py", line 182, in _generate_helper
    attention_map = torch.mul(fmaps, weights)
RuntimeError: The size of tensor a (256) must match the size of tensor b (64) at non-singleton dimension 1
```

My model is a variant of ResNet-50, and the input images have shape (1, 3, 352, 352). I have tried the measures you mentioned, but they don't work and I still get this error.
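For context, the RuntimeError in `torch.mul(fmaps, weights)` means the hooked feature maps and the pooled gradient weights disagree in channel count (256 vs. 64), which usually points at a mismatch in the layer being hooked. The failing broadcast can be reproduced in isolation; the shapes below are illustrative, not taken from the actual model:

```python
import torch

# Feature maps as if hooked from a layer with 256 channels
fmaps = torch.randn(1, 256, 11, 11)
# Gradient weights as if pooled from a layer with only 64 channels
weights = torch.randn(1, 64, 1, 1)

try:
    torch.mul(fmaps, weights)
except RuntimeError as e:
    # Broadcasting fails because dim 1 differs (256 vs 64)
    print(e)

# With matching channel counts, the Grad-CAM-style weighting works:
weights_ok = torch.randn(1, 256, 1, 1)
weighted = torch.mul(fmaps, weights_ok)
print(weighted.shape)  # torch.Size([1, 256, 11, 11])
```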

Karol-G commented 3 years ago

Hi again,

so you have a new error now? Please post a snippet of the code you are using so I can get better insight into the problem.

Best Karol