raghakot / keras-vis

Neural network visualization toolkit for keras
https://raghakot.github.io/keras-vis
MIT License

Problem with keras ResNet50 CAM visualisation #53

Closed alecoutre1 closed 6 years ago

alecoutre1 commented 7 years ago

Hi, first of all, thank you for the great package; it is very useful and interesting. I wanted to use the pre-trained models provided by Keras (https://keras.io/applications) with the ImageNet weights. But when I run the code from the example notebook attention.ipynb, only swapping in the Keras model (and adjusting the image sizes and layer_idx where necessary), the CAM visualisation doesn't seem to work with the ResNet50 architecture: it returns a completely blue map. Is there something particular about this network's architecture that could be problematic for CAM visualisation?

raghakot commented 7 years ago

You need to specify penultimate_layer_idx in this case. By default, it uses the AveragePooling layer, which has no spatial resolution. Try the layer above it, which has a (7, 7) resolution.

You can look up its layer_idx with:

penultimate_layer_idx = utils.find_layer_idx(model, 'the layer name in model summary')
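Putting it together, a minimal sketch against the keras-vis API (the layer name 'activation_49' and class index 20 are illustrative assumptions; check your own model.summary()):

import numpy as np
from keras.applications.resnet50 import ResNet50
from vis.utils import utils
from vis.visualization import visualize_cam

model = ResNet50(weights='imagenet')

# 'activation_49' is the last ResNet50 layer with (7, 7) spatial resolution;
# the default choice (the average-pooling layer) has no spatial extent.
penultimate_layer_idx = utils.find_layer_idx(model, 'activation_49')

img = np.random.random((224, 224, 3))  # stand-in for a preprocessed input image
heatmap = visualize_cam(model,
                        layer_idx=utils.find_layer_idx(model, 'predictions'),
                        filter_indices=20,  # any ImageNet class index
                        seed_input=img,
                        penultimate_layer_idx=penultimate_layer_idx)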

Let me know if that worked for you. Also consider submitting a PR to add that as an example :)

alecoutre1 commented 7 years ago

Thank you for your reply. I specified the penultimate_layer_idx in order to use the convolutional layer above. The "vanilla" CAM visualisation now works, but "guided" still returns blue maps. And I just noticed that guided saliency visualisation doesn't work either.

raghakot commented 7 years ago

Hmm. All blue means the gradients are all 0s, which seems unlikely. Could be a bug; I will try to investigate.
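A quick way to check, reusing the names from the ResNet50 sketch above (this is a hypothetical diagnostic, not a keras-vis API):

import numpy as np
import keras.backend as K

# Compute the raw gradient of the class score w.r.t. the penultimate feature
# map; a uniformly blue (jet-colormapped) heatmap suggests it is all zeros.
score = model.output[0, 20]
fmap = model.layers[penultimate_layer_idx].output
grad_fn = K.function([model.input, K.learning_phase()], K.gradients(score, [fmap]))
grads = grad_fn([img[np.newaxis], 0])[0]  # 0 = test phase (BatchNorm in inference mode)
print('gradient range:', grads.min(), grads.max())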

ahmedhosny commented 7 years ago

I did the same as @alecoutre1 but with a different network. At this point I can confirm the blue, zero-gradient map when using visualize_cam(backprop_modifier=None).

raghakot commented 7 years ago

@ahmedhosny Perhaps your penultimate layer has a spatial resolution of (1, 1)? Can you give the specifics of the network structure?
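If you are unsure, a quick way to scan for a usable penultimate layer (assuming model is your compiled Keras model):

# Print every layer's output shape and pick one that still has spatial
# (or, for 3D networks, volumetric) extent.
for idx, layer in enumerate(model.layers):
    print(idx, layer.name, layer.output_shape)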

ahmedhosny commented 7 years ago

OK, scratch that: my network is 3D and had other resizing issues (see #54 and #55). backprop_modifier=None is now fine! Testing 'guided' next.

Here is my network anyway; I'm using layer dropout_4 as the penultimate layer:

Layer (type)                 Output Shape              Param #   
=================================================================
conv3d_1 (Conv3D)            (None, 46, 46, 46, 64)    8064      
_________________________________________________________________
batch_normalization_1 (Batch (None, 46, 46, 46, 64)    256       
_________________________________________________________________
leaky_re_lu_1 (LeakyReLU)    (None, 46, 46, 46, 64)    0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 46, 46, 46, 64)    0         
_________________________________________________________________
conv3d_2 (Conv3D)            (None, 44, 44, 44, 128)   221312    
_________________________________________________________________
batch_normalization_2 (Batch (None, 44, 44, 44, 128)   512       
_________________________________________________________________
leaky_re_lu_2 (LeakyReLU)    (None, 44, 44, 44, 128)   0         
_________________________________________________________________
max_pooling3d_1 (MaxPooling3 (None, 14, 14, 14, 128)   0         
_________________________________________________________________
dropout_2 (Dropout)          (None, 14, 14, 14, 128)   0         
_________________________________________________________________
conv3d_3 (Conv3D)            (None, 12, 12, 12, 256)   884992    
_________________________________________________________________
batch_normalization_3 (Batch (None, 12, 12, 12, 256)   1024      
_________________________________________________________________
leaky_re_lu_3 (LeakyReLU)    (None, 12, 12, 12, 256)   0         
_________________________________________________________________
dropout_3 (Dropout)          (None, 12, 12, 12, 256)   0         
_________________________________________________________________
conv3d_4 (Conv3D)            (None, 10, 10, 10, 512)   3539456   
_________________________________________________________________
batch_normalization_4 (Batch (None, 10, 10, 10, 512)   2048      
_________________________________________________________________
leaky_re_lu_4 (LeakyReLU)    (None, 10, 10, 10, 512)   0         
_________________________________________________________________
max_pooling3d_2 (MaxPooling3 (None, 3, 3, 3, 512)      0         
_________________________________________________________________
dropout_4 (Dropout)          (None, 3, 3, 3, 512)      0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 13824)             0         
_________________________________________________________________
dense_1 (Dense)              (None, 512)               7078400   
_________________________________________________________________
batch_normalization_5 (Batch (None, 512)               2048      
_________________________________________________________________
leaky_re_lu_5 (LeakyReLU)    (None, 512)               0         
_________________________________________________________________
dropout_5 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 256)               131328    
_________________________________________________________________
batch_normalization_6 (Batch (None, 256)               1024      
_________________________________________________________________
leaky_re_lu_6 (LeakyReLU)    (None, 256)               0         
_________________________________________________________________
dropout_6 (Dropout)          (None, 256)               0         
_________________________________________________________________
dense_3 (Dense)              (None, 2)                 514       
_________________________________________________________________
batch_normalization_7 (Batch (None, 2)                 8         
_________________________________________________________________
activation_1 (Activation)    (None, 2)                 0         
=================================================================

raghakot commented 7 years ago

Cool. If it is not proprietary, would you mind adding an example of 3D visualization? It would be nice to have.

ahmedhosny commented 7 years ago

I promise to share once I have them. Currently I'm just viewing 2D slices from the 3D volume; it's a tough one, as the best viz method in this case would be volumetric rendering.
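For reference, a rough sketch of that slice-by-slice viewing, assuming heatmap_3d is a (D, H, W) volume returned for the 3D network (the slice indices are arbitrary):

import matplotlib.pyplot as plt

# Show a few axial slices of the 3D heatmap, since keras-vis has no
# built-in volumetric renderer.
fig, axes = plt.subplots(1, 4, figsize=(12, 3))
for ax, z in zip(axes, (8, 16, 24, 32)):
    ax.imshow(heatmap_3d[z], cmap='jet')
    ax.set_title('slice z=%d' % z)
    ax.axis('off')
plt.show()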

raghakot commented 7 years ago

Thanks :). I will help you out if needed.

raghakot commented 7 years ago

@alecoutre1 I have been debugging this for half a day now and couldn't find any obvious issue that would cause an all-blue heatmap. All the gradients come out negative, which causes the relu(heatmap) step to zero out everything. The only suspect at this point is a problem with gradient computation under the modified backprop. Once I have the Theano implementation, I will be able to compare the two to see whether the gradient computation is at fault. It might very well be that the ResNet architecture is too deep for the saliency computation to be meaningful.
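For reference, a rough numpy restatement of the combining step described above (not the exact keras-vis code; the shapes are illustrative):

import numpy as np

fmaps = np.random.random((7, 7, 2048))           # penultimate feature maps (H, W, K)
grads = -np.abs(np.random.random((7, 7, 2048)))  # all-negative gradients, as observed
weights = grads.mean(axis=(0, 1))                # global-average-pool the gradients
cam = np.maximum((fmaps * weights).sum(-1), 0)   # ReLU zeroes the all-negative map
print(cam.max())  # 0.0 -> rendered as a uniformly blue heatmap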

I am adding a "help wanted" tag to see if someone who is a TensorFlow expert can help debug the issue.

samarth-b commented 7 years ago

@raghakot any headway on confirming that the gradients are correct compared with Theano? I am able to get reasonable visualizations with visualize_saliency.
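For reference, the call being compared, reusing the names from the ResNet50 sketch above:

from vis.visualization import visualize_saliency

# Pixel-wise saliency w.r.t. the class score; no penultimate layer needed.
grads = visualize_saliency(model,
                           layer_idx=utils.find_layer_idx(model, 'predictions'),
                           filter_indices=20,
                           seed_input=img)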

raghakot commented 7 years ago

I have not implemented the Theano version yet :( I've been super busy lately.

xiaohk commented 6 years ago

I encountered the same problem. For one of my two classes, the results using guided are all blue :( The visualization of the other class is fine, though.

keisen commented 6 years ago

Hi there.

I created PR #122, so please refer to it :-)