-
```
image 1/1 data/samples/Jujube_4501.bmp: Model Summary: 222 layers, 6.1556e+07 parameters, 6.1556e+07 gradients
Traceback (most recent call last):
  File "/home/zxzn/YOLOv3-GradCAM/gradcam/gradc…
```
-
visualize_cam output array shape is not the same as the input array shape. It gives me (224, 3, 3) as output for a (224, 224, 3) input shape. Any help? Thanks in advance.
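The (224, 3, 3) shape suggests (just a guess) that the CAM was computed along the wrong axes, e.g. a channels-last (H, W, C) image fed to code expecting channels-first (C, H, W). Either way, the 2-D CAM typically has to be upsampled to the input's spatial size before overlaying; a minimal sketch in plain PyTorch, with a hypothetical helper name and shapes of my own:

```python
import torch
import torch.nn.functional as F

def resize_cam_to_input(cam, input_hw):
    """Upsample a 2-D CAM (H', W') to the input's spatial size (H, W), normalised to [0, 1]."""
    cam = cam.unsqueeze(0).unsqueeze(0)                       # (1, 1, H', W') for interpolate
    cam = F.interpolate(cam, size=input_hw, mode="bilinear", align_corners=False)
    cam = cam.squeeze()                                       # back to (H, W)
    cam = cam - cam.min()
    return cam / (cam.max() + 1e-8)                           # avoid division by zero

# Example: a 7x7 CAM from the last conv layer, upsampled to a 224x224 input
heatmap = resize_cam_to_input(torch.rand(7, 7), (224, 224))
print(heatmap.shape)   # torch.Size([224, 224])
```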
-
In my use case, I only need to generate CAMs for one specific class (I am classifying hole-free and hole-present images). I am wondering if it is possible to generate CAMs along with the actual model outp…
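Not this library's API, but a hand-rolled sketch of one way to return the ordinary model output together with a Grad-CAM map for one fixed class; the ResNet-50 backbone, the hook placement on layer4[-1], and class index 1 for "hole-present" are all assumptions:

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical binary classifier: class index 1 = "hole-present".
model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

activations, gradients = {}, {}
target_layer = model.layer4[-1]                               # last conv block
target_layer.register_forward_hook(lambda m, i, o: activations.update(feat=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(grad=go[0]))

def forward_with_cam(x, class_idx=1):
    """Return the ordinary model output and the Grad-CAM map for one fixed class."""
    logits = model(x)                                         # normal predictions
    model.zero_grad()
    logits[:, class_idx].sum().backward()                     # backprop only the chosen class
    weights = gradients["grad"].mean(dim=(2, 3), keepdim=True)        # GAP of the gradients
    cam = F.relu((weights * activations["feat"]).sum(dim=1))          # (B, H', W')
    cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:], mode="bilinear",
                        align_corners=False).squeeze(1)               # upsample to input size
    return logits.detach(), cam

logits, cam = forward_with_cam(torch.rand(1, 3, 224, 224), class_idx=1)
print(logits.shape, cam.shape)   # torch.Size([1, 2]) torch.Size([1, 224, 224])
```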
-
Thank you for your interesting work; I would definitely agree that poly-cam is the new state-of-the-art explainability method. I have been using it for my thesis about Decoding the Art of Robot Tacti…
-
Hi Insik,
Firstly, thanks for implementing Grad-CAM.
When I played around with the Resnet50 notebook, I noticed that the prediction results change if you change the order of the images (img1, img2, i…
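One thing worth ruling out (just a guess on my side): if the notebook runs the model in train mode, BatchNorm statistics couple the images within a batch, so predictions can depend on batch order. A minimal order-invariance check, assuming a torchvision backbone as a stand-in:

```python
import torch
from torchvision import models

# Stand-in model; the point is only that eval mode makes per-image predictions
# independent of the other images in the batch (BatchNorm uses fixed statistics).
model = models.resnet50(weights=None)
model.eval()

imgs = torch.rand(3, 3, 224, 224)      # stand-ins for img1, img2, img3
perm = torch.tensor([2, 0, 1])         # the same images in a different order

with torch.no_grad():
    out_a = model(imgs)
    out_b = model(imgs[perm])

# Each image's prediction should match itself regardless of ordering.
print(torch.allclose(out_a[perm], out_b, atol=1e-5))   # expected: True in eval mode
```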
-
Hi, I tried to run the RetinaNet sample code. The program started to run, but failed shortly after. Those are the last few lines:
```python
feature s…
```
-
Hi, @darkwrath @tataiani @prantikbubun @adityac94
When classifying with ResNet, accuracy is over 99%. But if you heat-map the object area with Grad-CAM using that model file, it does not match …
-
If I have a semantic segmentation model:
input1 : (B, 3, H, W)
input2 : (B, 3, H, W)
output1, output2 = model(input1, input2)
output1 : (B, C, H, W)
output2 : (B, C, H, W)
…
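I can't speak for this repo, but a common way to adapt Grad-CAM to a segmentation head is to reduce one output to a scalar first, e.g. sum the chosen class's scores over the whole mask (or over a region of interest) and backprop that. A rough sketch under those assumptions; the two-input forward signature follows the question, and the hooked layer is whichever one you want to explain:

```python
import torch.nn.functional as F

activations, gradients = {}, {}

def attach_hooks(layer):
    """Cache the chosen layer's activations and their gradients."""
    layer.register_forward_hook(lambda m, i, o: activations.update(feat=o))
    layer.register_full_backward_hook(lambda m, gi, go: gradients.update(grad=go[0]))

def segmentation_cam(model, input1, input2, class_idx, which_output=0):
    """Grad-CAM for a two-input / two-output segmentation model (hypothetical signature)."""
    output1, output2 = model(input1, input2)                    # each (B, C, H, W)
    target = (output1, output2)[which_output][:, class_idx]     # (B, H, W) scores of one class
    model.zero_grad()
    target.sum().backward()                                     # scalarise over the whole mask
    weights = gradients["grad"].mean(dim=(2, 3), keepdim=True)  # GAP of gradients per channel
    cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
    return F.interpolate(cam, size=input1.shape[2:], mode="bilinear",
                         align_corners=False).squeeze(1)        # (B, H, W)

# Usage (hypothetical layer name):
# attach_hooks(model.backbone.layer4[-1])
# cam = segmentation_cam(model, input1, input2, class_idx=2, which_output=0)
```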
-
Hi,
I wanted to ask how to visualize the attention masks as given in Fig. 1 of the paper. Does it involve using Grad-CAM, or is it directly the actual output of the masks? Also, given that the masks …
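Not an answer about this specific paper, but if the masks are direct model outputs (no Grad-CAM involved), a figure like Fig. 1 is usually produced by colour-mapping the mask and alpha-blending it onto the input image; a small sketch with placeholder arrays:

```python
import numpy as np
import matplotlib.pyplot as plt

def overlay_mask(image, mask, alpha=0.5):
    """image: (H, W, 3) floats in [0, 1]; mask: (H, W) array in any range."""
    mask = (mask - mask.min()) / (mask.max() - mask.min() + 1e-8)   # normalise to [0, 1]
    heat = plt.cm.jet(mask)[..., :3]                                # colour-map to RGB
    return (1 - alpha) * image + alpha * heat

img = np.random.rand(224, 224, 3)     # placeholder input image
msk = np.random.rand(224, 224)        # placeholder attention mask from the model
plt.imshow(overlay_mask(img, msk))
plt.axis("off")
plt.show()
```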
-
Hi all,
I am working with a feature extractor (Inception V3, VGG16, whatever) plus an LSTM for sequence classification (let's say 3 seconds each). Is there any way to use Grad-CAM in order to obtain…
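One possible recipe (a sketch, not an established API): hook the last conv features of the per-frame CNN, run the whole sequence through the LSTM head, and backprop the sequence-level class score; the gradient that arrives at the hooked features then yields one Grad-CAM map per frame. Architecture, layer choice, and frame count below are all assumed for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

# Hypothetical CNN+LSTM sequence classifier; all names and sizes are assumptions.
class SeqClassifier(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.cnn = models.vgg16(weights=None).features   # per-frame conv feature extractor
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.lstm = nn.LSTM(input_size=512, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, num_classes)

    def forward(self, frames):                  # frames: (T, 3, H, W), one sequence
        feats = self.cnn(frames)                # (T, 512, H', W')  <- hooked below
        pooled = self.pool(feats).flatten(1)    # (T, 512)
        _, (h, _) = self.lstm(pooled.unsqueeze(0))
        return self.head(h[-1])                 # (1, num_classes) sequence-level logits

model = SeqClassifier().eval()
acts, grads = {}, {}

def save_feat(module, inputs, output):
    acts["feat"] = output                                    # per-frame conv activations
    output.register_hook(lambda g: grads.update(grad=g))     # their gradients on backward
model.cnn.register_forward_hook(save_feat)

frames = torch.rand(8, 3, 224, 224)             # e.g. 8 frames from a 3-second clip
logits = model(frames)
model.zero_grad()
logits[0, logits.argmax()].backward()           # backprop the predicted sequence class

weights = grads["grad"].mean(dim=(2, 3), keepdim=True)       # (T, 512, 1, 1)
cams = F.relu((weights * acts["feat"]).sum(dim=1))           # one CAM per frame: (T, H', W')
print(cams.shape)                                            # torch.Size([8, 7, 7])
```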