-
Hi @Sibozhu
I am using a custom-trained Tiny YOLOv3 model with 5 classes for GradCAM.
I noticed the following:
![image](https://user-images.githubusercontent.com/57705684/106568374-3758e700-65…
-
Hi,
I have implemented text prompt-controlled segmentation using selective search and CLIP. Can you suggest any additional techniques I can include? I am considering trying CLIP-GradCAM #4
h…
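Roughly, the current pipeline looks like the sketch below (assuming OpenCV's selective search from opencv-contrib-python and the openai/clip package; the prompt, proposal count, and score threshold are just example values, not my exact settings):

```python
import cv2
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# 1. Region proposals from selective search (needs opencv-contrib-python).
image = cv2.imread("input.jpg")
ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
ss.setBaseImage(image)
ss.switchToSelectiveSearchFast()
boxes = ss.process()[:200]  # keep only the top proposals

# 2. Encode the text prompt once.
prompt = "a photo of a dog"  # example prompt
with torch.no_grad():
    text_feat = model.encode_text(clip.tokenize([prompt]).to(device))
    text_feat /= text_feat.norm(dim=-1, keepdim=True)

# 3. Score every region crop against the prompt with CLIP.
scores = []
with torch.no_grad():
    for (x, y, w, h) in boxes:
        crop = Image.fromarray(cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2RGB))
        img_feat = model.encode_image(preprocess(crop).unsqueeze(0).to(device))
        img_feat /= img_feat.norm(dim=-1, keepdim=True)
        scores.append((img_feat @ text_feat.T).item())

# 4. Keep the highest-scoring regions as the prompt-selected segments.
selected = [box for box, s in zip(boxes, scores) if s > 0.3]  # threshold is illustrative
```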
-
## 🚀 Feature
List of low-priority features that would be nice to have:
- [ ] MMOCR implementation
- [ ] Add GradCAM support: [here](https://github.com/jacobgil/pytorch-grad-cam) and Florian Kats…
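For the Grad-CAM item above, a minimal sketch of how the linked pytorch-grad-cam library is typically used; the ResNet-50 backbone, target layer, and class index here are placeholders, not a decision for this project:

```python
import torch
from torchvision.models import resnet50
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget
from pytorch_grad_cam.utils.image import show_cam_on_image

# Placeholder model; swap in the project's own backbone.
model = resnet50(weights="IMAGENET1K_V1").eval()
target_layers = [model.layer4[-1]]  # the last conv block is the usual Grad-CAM target

input_tensor = torch.randn(1, 3, 224, 224)  # replace with a preprocessed image
cam = GradCAM(model=model, target_layers=target_layers)
grayscale_cam = cam(input_tensor=input_tensor,
                    targets=[ClassifierOutputTarget(281)])[0]  # class 281 = example

# Overlay the heatmap on the (0-1 normalized, HWC) source image:
# visualization = show_cam_on_image(rgb_img, grayscale_cam, use_rgb=True)
```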
-
Currently, encoder-decoder models lack support for Grad-CAM (Gradient-weighted Class Activation Mapping) visualization with cross-attention mechanisms. Grad-CAM is a valuable tool for interpreting mo…
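To illustrate what such support might look like, here is a rough sketch that hooks the cross-attention output of a toy PyTorch encoder-decoder and applies Grad-CAM style weighting. The toy model, module path, and score choice are placeholder assumptions for discussion, not this library's API:

```python
import torch
import torch.nn as nn

# Toy encoder-decoder standing in for a real model (the module path below is the
# usual one for nn.TransformerDecoderLayer; other architectures will differ).
model = nn.Transformer(d_model=64, nhead=4, num_encoder_layers=2,
                       num_decoder_layers=2, batch_first=True)

captured = {}

def save_cross_attn(module, inputs, output):
    # nn.MultiheadAttention returns (attn_output, attn_weights); keep the output
    # and retain its grad so it can be read after backward().
    attn_out = output[0]
    attn_out.retain_grad()
    captured["acts"] = attn_out

# multihead_attn is the cross-attention block of the last decoder layer.
model.decoder.layers[-1].multihead_attn.register_forward_hook(save_cross_attn)

src = torch.randn(1, 10, 64)   # encoder input (batch, src_len, d_model)
tgt = torch.randn(1, 5, 64)    # decoder input (batch, tgt_len, d_model)
out = model(src, tgt)

score = out[0, -1].sum()       # stand-in for the class / token score of interest
model.zero_grad()
score.backward()

# Grad-CAM style weighting: average the gradient over target positions to get one
# weight per feature channel, then weight the activations and apply ReLU.
acts = captured["acts"]                        # (1, tgt_len, d_model)
weights = acts.grad.mean(dim=1, keepdim=True)  # (1, 1, d_model)
cam = torch.relu((weights * acts).sum(dim=-1)).squeeze(0)  # importance per target position
```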
-
Nice work. In the code, you obtain the CAM using only one Conv2d layer. My intuition is that a method like Grad-CAM should not be used here to get the CAM, and I have also seen this process in other places. Can you…
-
Hello, I am trying to apply Grad-CAM, inspired by your code, to my ResNet3D model.
I keep getting an error while calculating the gradients at the following line:
conv_outputs, predictions = grad_…
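For context, the pattern I am following is roughly the tf.keras GradientTape version of Grad-CAM sketched below; the model, layer name, and shapes here are placeholders rather than my actual ResNet3D:

```python
import tensorflow as tf

# Placeholder 3D CNN; in my case this is the ResNet3D model.
model = tf.keras.Sequential([
    tf.keras.layers.Conv3D(8, 3, activation="relu", input_shape=(16, 64, 64, 1),
                           name="last_conv"),
    tf.keras.layers.GlobalAveragePooling3D(),
    tf.keras.layers.Dense(5, activation="softmax"),
])

# Model that maps the input to (activations of the target conv layer, predictions).
grad_model = tf.keras.models.Model(
    inputs=model.inputs,
    outputs=[model.get_layer("last_conv").output, model.output],
)

volume = tf.random.normal((1, 16, 64, 64, 1))  # placeholder input volume

with tf.GradientTape() as tape:
    conv_outputs, predictions = grad_model(volume)
    class_idx = tf.argmax(predictions[0])
    loss = predictions[:, class_idx]

# Gradients of the class score w.r.t. the conv activations.
grads = tape.gradient(loss, conv_outputs)
weights = tf.reduce_mean(grads, axis=(1, 2, 3))  # average over D, H, W
cam = tf.nn.relu(tf.reduce_sum(conv_outputs * weights[:, None, None, None, :], axis=-1))
```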
-
Thank you for this useful visualization package!
Right now I have a two-input, one-output model as follows
![image](https://user-images.githubusercontent.com/117333925/208346985-95daa624-ba9b-4d34…
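For reference, one workaround I am considering is to wrap the model behind a single-tensor `forward()` so that single-input CAM tooling can be reused; below is a sketch with placeholder module and input names, not the package's own API:

```python
import torch
import torch.nn as nn

class TwoInputNet(nn.Module):
    """Placeholder for the actual two-input, one-output model."""
    def __init__(self):
        super().__init__()
        self.branch_a = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
        self.branch_b = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 5))

    def forward(self, x_a, x_b):
        return self.head(torch.cat([self.branch_a(x_a), self.branch_b(x_b)], dim=1))

class SingleInputWrapper(nn.Module):
    """Expose a one-tensor forward() so single-input CAM tools can be applied;
    the second input is held fixed inside the wrapper."""
    def __init__(self, model, fixed_second_input):
        super().__init__()
        self.model = model
        self.fixed_second_input = fixed_second_input

    def forward(self, x):
        return self.model(x, self.fixed_second_input)

model = TwoInputNet().eval()
x_a = torch.randn(1, 3, 64, 64)
x_b = torch.randn(1, 3, 64, 64)

wrapped = SingleInputWrapper(model, x_b)
# `wrapped` now behaves like a normal single-input classifier, so a CAM can be
# computed against one of the conv layers in branch_a.
out = wrapped(x_a)
print(out.shape)  # (1, 5)
```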
-
1. Public code and paper link:
I have installed the following code: https://github.com/AILab-CVC/GroupMixFormer
Paper link: https://arxiv.org/abs/2311.15157
2. What does this work d…
-
Visualize which parts of the image the model built in #9 focuses on when judging rarity.
It would be good if we can clearly see that it attends to the border of the card.
-
### High-level Tasks Summary
1. All requests are sent as JSON, not form-data, except for file upload
2. All responses have a `type` field to indicate in which data type we get the response
3. List of possible types a…
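To make these conventions concrete, a rough client-side sketch; the endpoint, field names, and type values below are illustrative, not the final spec:

```python
import requests

BASE_URL = "https://api.example.com"  # placeholder host

# 1. Regular requests carry a JSON body (not form-data).
resp = requests.post(f"{BASE_URL}/items", json={"name": "sample", "count": 3})

# 2. Every response carries a `type` field describing the payload's data type.
body = resp.json()
if body["type"] == "item":          # illustrative type names
    item = body["data"]
elif body["type"] == "error":
    raise RuntimeError(body["data"]["message"])

# File uploads are the one exception: they use multipart form-data.
with open("report.pdf", "rb") as f:
    upload = requests.post(f"{BASE_URL}/files", files={"file": f})
```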