DocandBean opened this issue 2 years ago
Creating a transfer learning model using Keras.Applications yields a model.summary() such as:
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_2 (InputLayer)         [(None, 160, 160, 3)]    0
_________________________________________________________________
sequential (Sequential)      (None, 160, 160, 3)       0
_________________________________________________________________
tf.math.truediv (TFOpLambda) (None, 160, 160, 3)       0
_________________________________________________________________
tf.math.subtract (TFOpLambda (None, 160, 160, 3)       0
_________________________________________________________________
mobilenetv2_1.00_160 (Functi (None, 5, 5, 1280)        2257984
_________________________________________________________________
global_average_pooling2d (Gl (None, 1280)              0
_________________________________________________________________
dropout (Dropout)            (None, 1280)              0
_________________________________________________________________
dense (Dense)                (None, 1)                 1281
=================================================================
Total params: 2,259,265
Trainable params: 1,281
Non-trainable params: 2,257,984
_________________________________________________________________
Note the Functional layer mobilenetv2_1.00_160, which hides the underlying base_model.
VizGradCAM fails to find the last convolutional layer because it does not recurse into the nested Functional base_model.
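A possible workaround (a minimal sketch, not part of VizGradCAM's actual API) is to search the layer tree recursively instead of only the top level. The helper below assumes Keras-style objects, where nested Functional/Sequential models expose their own layers through a .layers attribute and convolutional layers have "Conv" in their class name:

```python
def find_last_conv_layer(model):
    """Return the last convolutional layer, descending into nested models.

    Walks model.layers from last to first; when a layer is itself a
    container (Functional/Sequential models expose .layers), it is
    searched recursively before falling back to the name check.
    """
    for layer in reversed(model.layers):
        # Dive into nested Functional/Sequential sub-models first.
        if hasattr(layer, "layers"):
            found = find_last_conv_layer(layer)
            if found is not None:
                return found
        # Matches Conv1D/Conv2D/Conv3D, DepthwiseConv2D, etc. by class name.
        if "Conv" in type(layer).__name__:
            return layer
    return None

# Hypothetical usage with the model from the summary above:
# target_layer = find_last_conv_layer(model)  # a layer inside mobilenetv2_1.00_160
```

This finds the last Conv layer inside mobilenetv2_1.00_160 even though it sits behind the Functional wrapper, which is where the top-level search in VizGradCAM stops.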
I have the same problem.