MECLabTUDA / M3d-Cam

MIT License

Applying heatmap overlay in 3D #5

Closed bsolano closed 3 years ago

bsolano commented 3 years ago

Hi. The documentation does not say how to apply a 3D heatmap. For instance, I am inputting an MRI (87x75x100) into my model, and the injected gcampp backend returns a heatmap that is (2x3x5). It is not clear how to apply that. Also, if I use ggcam I obtain an axial result, but my input was sagittal. It is not a big issue, just a matter of rotating, but it was not intuitive. Therefore I am not sure whether the heatmap returned by gcampp is also rotated. I have no clue how to use it on 3D data. Best regards.

Karol-G commented 3 years ago

Hi,

The documentation does not say how to apply a 3D heatmap. For instance, I am inputting an MRI (87x75x100) into my model, and the injected gcampp backend returns a heatmap that is (2x3x5). It is not clear how to apply that.

Most models use some kind of bottleneck architecture, so the layers in the middle are smaller than the layers at the input/output. If you extract attention maps from the middle of such a model, the attention maps will be smaller as well. Simply rescale your attention maps if you want to superimpose the 3D attention maps on your input. For 2D this is done automatically, but I have not implemented that yet for 3D, so you need to do it yourself.
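A minimal sketch of that rescaling, using the (87x75x100) input and (2x3x5) map sizes from the question; the tensor contents and names are illustrative, and `F.interpolate` with `mode="trilinear"` is just one way to do the upsampling:

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes from the question: MRI input 87x75x100, attention map 2x3x5.
attention_map = torch.rand(2, 3, 5)   # raw 3D attention map returned by medcam
input_shape = (87, 75, 100)           # spatial shape of the MRI volume

# F.interpolate needs a 5D tensor (batch, channel, D, H, W) for trilinear mode,
# so add singleton batch/channel dims, upsample, then drop them again.
resized = F.interpolate(
    attention_map[None, None],
    size=input_shape,
    mode="trilinear",
    align_corners=False,
)[0, 0]

print(resized.shape)  # torch.Size([87, 75, 100])
```

The resized map can then be superimposed on the input volume slice by slice, the same way the 2D heatmaps are.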

Also, if I use ggcam I obtain an axial result, but my input was sagittal. It is not a big issue, just a matter of rotating, but it was not intuitive.

Hmm, I will have a look into this within the next days.

Best Karol

dxs66 commented 3 years ago

Hi. The documentation does not say how to apply a 3D heatmap. For instance, I am inputting an MRI (87x75x100) into my model, and the injected gcampp backend returns a heatmap that is (2x3x5). It is not clear how to apply that. Also, if I use ggcam I obtain an axial result, but my input was sagittal. It is not a big issue, just a matter of rotating, but it was not intuitive. Therefore I am not sure whether the heatmap returned by gcampp is also rotated. I have no clue how to use it on 3D data. Best regards.

Hi, I am using M3d-Cam to apply a 3D heatmap, but I have run into a problem: the output dirs are created, but there are no 3D heatmaps in them. My usage is as follows: [screenshot]. Can you send me your code so I can solve this problem? Thank you!

Karol-G commented 3 years ago

Hi,

are you able to generate attention maps with the default inject arguments, like this?

model = medcam.inject(model, output_dir="attention_maps", save_maps=True)

If yes, add your current arguments back one by one until the bug occurs. Can you also send me the code where you call the forward of your model?

Best Karol

Karol-G commented 3 years ago

Hi,

ok, good to know. Sorry for the confusion; I meant the code where you call the forward of your model, like this:

for input_batch in dataloader:
   prediction = model(input_batch)
   ...

Not the code of your forward itself ;)

Best Karol

dxs66 commented 3 years ago

[screenshot showing the model's `forward_global` and `forward_local` methods]

Karol-G commented 3 years ago

Ah, I see the problem now. Medcam backs up and replaces the model's original forward with its own, and it expects that method to be called just "forward". This won't work in your case, as your forwards are called "forward_global" and "forward_local", so medcam can't find them. Try renaming the forward that you are using to "forward".
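One way to do that rename without editing the model class is a small wrapper module that exposes the custom method under the name `forward`. This is an illustrative sketch: `ForwardAdapter` and the choice of `forward_global` are assumptions, not part of the library.

```python
import torch.nn as nn

class ForwardAdapter(nn.Module):
    """Expose a model's custom forward method (here assumed to be
    `forward_global`) under the standard name `forward`, so that
    medcam can back it up and hook into it."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        # Delegate to the custom-named forward of the wrapped model.
        return self.model.forward_global(x)

# Usage sketch (names hypothetical):
# model = ForwardAdapter(your_model)
# model = medcam.inject(model, output_dir="attention_maps", save_maps=True)
```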

bsolano commented 3 years ago

Hello!

The documentation does not say how to apply a 3D heatmap. For instance, I am inputting an MRI (87x75x100) into my model, and the injected gcampp backend returns a heatmap that is (2x3x5). It is not clear how to apply that.

Most models use some kind of bottleneck architecture, so the layers in the middle are smaller than the layers at the input/output. If you extract attention maps from the middle of such a model, the attention maps will be smaller as well. Simply rescale your attention maps if you want to superimpose the 3D attention maps on your input. For 2D this is done automatically, but I have not implemented that yet for 3D, so you need to do it yourself.

I am sorry for this very late response. I was using this code:

class_names = ['CN','EMCI','MCI','LMCI','AD']
model = densenet121(channels=1, num_classes=len(class_names), drop_rate=0.7).to(device)
model = medcam.inject(model, output_dir="attention_maps", backend='ggcam', label=4, save_maps=True, metric='wioa')
model = torch.nn.DataParallel(model).to(device)
model.load_state_dict(torch.load('../../Alzheimer-ResNets/results-6-2-2020-extended/cuda-epoch-109-alzheimer-densenet121.pth'))

According to the documentation, the default value of `layer` is 'auto', and the library reported "Selected module layer: features". Therefore, I believe I was not using a middle layer. Am I right?

Also, if I use ggcam I obtain an axial result, but my input was sagittal. It is not a big issue, just a matter of rotating, but it was not intuitive.

Hmm, I will have a look into this within the next days.

Did you look into this?

I appreciate your work very much. Thank you.

Best regards,

Braulio J. Solano-Rojas

nasir3843 commented 2 years ago

Hi Braulio J. Solano-Rojas

Did you manage to solve the problem of overlaying attention maps on the 3D input volume? I am facing the same problem. In my case, I get 6x6x6 attention maps from the last convolution layer, while my input volume to the network is 110x110x110. I am not sure how to overlay these attention maps on the input volume. Your help in this regard would be appreciated.
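For reference, a sketch of one possible overlay, following the rescaling advice earlier in the thread and assuming torch tensors with the 6x6x6 / 110x110x110 shapes from this comment (the tensor contents and the blending weight `alpha` are illustrative):

```python
import torch
import torch.nn.functional as F

volume = torch.rand(110, 110, 110)  # stand-in for the input volume
attention = torch.rand(6, 6, 6)     # stand-in for the last-conv attention map

# Upsample the attention map to the input resolution (5D tensor required).
upsampled = F.interpolate(
    attention[None, None], size=volume.shape,
    mode="trilinear", align_corners=False,
)[0, 0]

# Normalize to [0, 1], then alpha-blend onto the volume.
upsampled = (upsampled - upsampled.min()) / (upsampled.max() - upsampled.min() + 1e-8)
alpha = 0.4
overlay = (1 - alpha) * volume + alpha * upsampled

print(overlay.shape)  # torch.Size([110, 110, 110])
```

Individual slices of `overlay` can then be inspected with any 2D viewer; for a color heatmap, the upsampled map can instead be passed through a colormap before blending.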

Thank you