kazuto1011 / grad-cam-pytorch

PyTorch re-implementation of Grad-CAM (+ vanilla/guided backpropagation, deconvnet, and occlusion sensitivity maps)
MIT License

How can I use my own weight_pth? #30

Closed 1579109909 closed 3 years ago

1579109909 commented 4 years ago

Thanks for your work on Grad-CAM! However, how can I use my own weight .pth file to produce the Grad-CAM images?

kazuto1011 commented 4 years ago

Please write your own code modeled on demo3.

  1. Determine the layer where you want to visualize activation maps.
    You can see the available layer names from your weights, e.g., print(*your_weights.keys(), sep="\n").
  2. Determine the class id for which to compute gradients.
  3. Create a GradCAM instance with your model.
  4. Then call forward(), backward(), and generate().
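Step 1 can be sketched as follows. This is a minimal, self-contained example with a hypothetical toy model standing in for your own architecture; the commented-out `torch.load` lines show where your own checkpoint would come in (the filename is a placeholder):

```python
import torch
import torch.nn as nn

# Hypothetical toy model -- replace this with your own architecture
# before loading your own weight .pth file.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Flatten())

# In practice, you would load your own checkpoint like this:
# state_dict = torch.load("your_weights.pth", map_location="cpu")
# model.load_state_dict(state_dict)
state_dict = model.state_dict()

# Step 1: inspect the available layer names. The prefix before
# ".weight" / ".bias" is the name you later pass as target_layer.
print(*state_dict.keys(), sep="\n")
```

For an `nn.Sequential` model like this, the keys are `0.weight`, `0.bias`, etc., so the corresponding layer name is just `0`.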

khaldonminaga commented 4 years ago

Hi, can you expound on steps 3 and 4? @kazuto1011

JanineCHEN commented 4 years ago

Thanks a lot for the repo. I have the same question here; could you kindly elaborate on steps 3 and 4? @kazuto1011 Thank you in advance.

kazuto1011 commented 4 years ago

Suppose you have a model that returns logits in (num_batch, num_class) shape. Then please create a Grad-CAM instance with the model.

gcam = GradCAM(model=model)

The GradCAM class has three principal methods: forward, backward, and generate.

First, please call forward with batched images; you get class probabilities and the corresponding class indices in descending order. At the same time, gcam saves the intermediate feature maps.

sorted_probs, sorted_ids = gcam.forward(images)

Here we take the index of the top-1 class for each image, for example.

top1_ids = sorted_ids[:, [0]]

Please call backward with the indices. This line computes and saves the class-specific gradients at all layers.

gcam.backward(ids=top1_ids)

Now, gcam has {feature map, gradient} pairs of all layers. Finally, please call generate with the layer name you want to visualize.

regions = gcam.generate(target_layer='any_layer_names')
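The whole forward/backward/generate cycle above can be sketched end to end. The snippet below is a minimal, self-contained illustration of what those calls do internally, using a hypothetical toy model and a hand-rolled Grad-CAM built from PyTorch hooks; it is not the repo's GradCAM class itself, just the same mechanism in miniature:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical tiny classifier standing in for "your model" -- any
# model returning (num_batch, num_class) logits works the same way.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
model.eval()

# Hooks that save the {feature map, gradient} pair for one layer,
# mirroring what gcam.forward()/gcam.backward() store for all layers.
fmaps, grads = {}, {}
target_layer = "2"  # name of the second conv in this Sequential

def fwd_hook(module, inputs, output):
    fmaps["value"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    grads["value"] = grad_output[0].detach()

layer = dict(model.named_modules())[target_layer]
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

# forward(): class probabilities and indices in descending order
images = torch.randn(2, 3, 32, 32)
logits = model(images)
sorted_probs, sorted_ids = F.softmax(logits, dim=1).sort(dim=1, descending=True)
top1_ids = sorted_ids[:, [0]]

# backward(): class-specific gradients for the top-1 classes
model.zero_grad()
logits.gather(1, top1_ids).sum().backward()

# generate(): weight feature maps by pooled gradients, then ReLU
weights = grads["value"].mean(dim=(2, 3), keepdim=True)
regions = F.relu((weights * fmaps["value"]).sum(dim=1, keepdim=True))
print(regions.shape)  # one (1, H, W) heatmap per image
```

The resulting `regions` tensor is the raw class activation map at the target layer's resolution; for visualization it is typically upsampled to the input size and overlaid on the image, as the repo's demos do.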