First of all, great job with this repository; it has helped me a lot.
In the paper *LayerCAM: Exploring Hierarchical Class Activation Maps for Localization*, the steps for computing the CAM are:
step 1: $g_{ij}^{kc} = \frac{\partial y^c}{\partial A_{ij}^k}$ — same as GradCAM.

step 2: $w_{ij}^{kc} = \mathrm{ReLU}(g_{ij}^{kc})$ — different from GradCAM, but this can be implemented with the class GradCAM via the parameter `gradient_modifier`.

step 3: $\hat{A}_{ij}^{k} = w_{ij}^{kc} \cdot A_{ij}^{k}$ — different from GradCAM, which instead uses the average-pooled gradients as channel weights.

step 4: $M^c = \mathrm{ReLU}(\sum_k \hat{A}^k)$ — same as GradCAM.

So, it is not enough to use the parameter: https://github.com/keisen/tf-keras-vis/blob/0bc00a80c04a66669df8dfd8d1137cdca9b86610/tf_keras_vis/layercam.py#L22
It is also necessary to implement the element-wise weighting of step 3.
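For reference, the four steps above can be sketched with a raw `tf.GradientTape`, independent of tf-keras-vis. This is a minimal illustration, not the library's implementation; the layer name `target_conv` and the toy model are hypothetical placeholders:

```python
import numpy as np
import tensorflow as tf

def layercam(model, layer_name, images, class_index):
    # Sub-model exposing both the target feature maps A^k and the scores y.
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(layer_name).output, model.output]
    )
    with tf.GradientTape() as tape:
        activations, preds = grad_model(images)
        score = preds[:, class_index]                   # y^c
    grads = tape.gradient(score, activations)           # step 1: g_{ij}^{kc}
    weights = tf.nn.relu(grads)                         # step 2: w_{ij}^{kc}
    weighted = weights * activations                    # step 3: element-wise, no pooling
    cam = tf.nn.relu(tf.reduce_sum(weighted, axis=-1))  # step 4: M^c
    return cam.numpy()

# Tiny toy model just to exercise the function.
inputs = tf.keras.Input((8, 8, 1))
x = tf.keras.layers.Conv2D(4, 3, activation="relu", name="target_conv")(inputs)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(3, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

cam = layercam(model, "target_conv", np.random.rand(1, 8, 8, 1).astype("float32"), 0)
print(cam.shape)  # one spatial map per image; the channel axis is summed out
```

The only change relative to a plain GradCAM loop is step 3: the ReLU-rectified gradient weights multiply the activations element-wise at each spatial location, instead of being spatially average-pooled into per-channel scalars first.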