Major model refactoring alongside the integration of class activation maps.
The training script has an additional parameter `-m` that specifies the ID of the model to be used. Currently, only two IDs are supported: `vgg_base` yields the architecture of the adapted VGG model that was used in our paper (including batch norm etc.), and `vgg_cam` yields the plain, pretrained VGG with a single softmax layer after flattening the output of the last convolution. The second configuration makes it possible to obtain class activation maps without computing gradients. During development, my impression was that the CAMs of this model are more reliable and reasonable than those of `vgg_base`, where nonlinearities apply after the last head layer.
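As an illustration (the actual argument handling in the training script may differ), the `-m` flag could be wired up with `argparse` roughly like this; the flag name `--model_id` and the default are assumptions:

```python
import argparse

# Hypothetical sketch of the -m model-ID flag; the real training
# script may name and default this differently.
SUPPORTED_MODELS = ["vgg_base", "vgg_cam"]

parser = argparse.ArgumentParser(description="Train a POCOVID model")
parser.add_argument(
    "-m", "--model_id", type=str, default="vgg_base",
    choices=SUPPORTED_MODELS,
    help="ID of the model architecture to train",
)

args = parser.parse_args(["-m", "vgg_cam"])
print(args.model_id)  # vgg_cam
```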
The `Evaluator` class was extended to handle the different model keys.
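A minimal sketch of the kind of key-to-model dispatch this implies; the factory names and return values here are hypothetical, not the actual `Evaluator` internals:

```python
# Hypothetical sketch: map model IDs to builder functions so code such
# as the Evaluator can construct the right architecture from its key.
def build_vgg_base():
    return "adapted VGG with batch norm (paper architecture)"

def build_vgg_cam():
    return "plain pretrained VGG + single softmax head"

MODEL_FACTORY = {
    "vgg_base": build_vgg_base,
    "vgg_cam": build_vgg_cam,
}

def get_model(model_id: str):
    if model_id not in MODEL_FACTORY:
        raise ValueError(f"Unknown model key: {model_id}")
    return MODEL_FACTORY[model_id]()
```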
`get_class_activation_map` is available via `from pocovidnet.cam import get_class_activation_map`. It must be called with a compatible Keras model, i.e. a model with only one softmax-activated layer after the flattening, an image as a 3D `np.array`, and a `class_id` for the CAM. Optional parameters are `zeroing` (the threshold below which CAM values are set to zero and will not show on the heatmap) and `heatmap_weight` (the additive weight of the heatmap in the image overlay).
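For intuition, here is a sketch of the underlying computation, not the library's implementation: the classic CAM weighted sum (Zhou et al., 2016), adapted under the assumption of a flatten-then-softmax head, so the class's dense weights are reshaped back onto the conv grid:

```python
import numpy as np

def class_activation_map(conv_maps, dense_weights, class_id, zeroing=0.0):
    """Sketch of a CAM for a model with a single softmax layer after
    flattening the last conv output.

    conv_maps:     (H, W, K) activations of the last conv layer
    dense_weights: (H*W*K, num_classes) weights of the softmax layer
    """
    h, w, k = conv_maps.shape
    # Undo the Flatten: recover per-location weights of the class'
    # softmax column (Keras flattens channels_last row-major as (h, w, k)).
    class_weights = dense_weights[:, class_id].reshape(h, w, k)
    # Weighted sum over channels gives the raw activation map.
    cam = (conv_maps * class_weights).sum(axis=-1)
    # Normalize to [0, 1] so it can be rendered as a heatmap.
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    # `zeroing`: suppress values below the threshold so they do not
    # appear in the heatmap overlay.
    cam[cam < zeroing] = 0.0
    return cam
```

The resulting map would then be resized to the input image and blended onto it, with `heatmap_weight` scaling the heatmap's contribution in the overlay.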
The full, gradient-based analog, `GradCAM`, is available via `from pocovidnet.grad_cam import GradCAM`. Objects need to be instantiated as `explainer = GradCAM()`, and gradient CAMs can then be retrieved via `explainer.explain(image, model, class_index)`. The optional argument `layer_name` gives the name of the layer from which to compute the CAM (by default, the last conv layer is retrieved automatically). The other optional parameters are as above.
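Again as a sketch rather than the library code, the Grad-CAM combination step looks roughly as follows, assuming the conv activations and the gradients of the class score with respect to them have already been computed (e.g. with `tf.GradientTape`):

```python
import numpy as np

def grad_cam_from_grads(conv_maps, grads, zeroing=0.0):
    """Sketch of the Grad-CAM combination step.

    conv_maps: (H, W, K) activations of the target conv layer
    grads:     (H, W, K) gradients of the class score w.r.t. conv_maps
    """
    # Channel importance weights: global-average-pool the gradients.
    weights = grads.mean(axis=(0, 1))                     # (K,)
    # Weighted combination of the feature maps, then ReLU.
    cam = np.maximum((conv_maps * weights).sum(axis=-1), 0.0)
    # Normalize to [0, 1] for rendering as a heatmap.
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    # Suppress values below the `zeroing` threshold, as for plain CAM.
    cam[cam < zeroing] = 0.0
    return cam
```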
Example image of `GradCAM`:

Example image of `CAM`: