-
Hello,
I have a question regarding the saliency maps in your paper: [Figure 3](https://www.nature.com/articles/s41467-024-47744-0/figures/3). Could you clarify what the dark and light lines in the …
-
Hi! How did you calculate the corresponding saliency map for every input?
-
### Problem Description
I've realized that SHAP doesn't have an implementation for saliency maps. Even though `image_plot` lets us work with the Shapley values for an image, it doesn't provide an attent…
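One common workaround is to collapse the per-pixel, per-channel Shapley values (however they were computed) into a single 2-D heatmap. A minimal sketch, assuming you already have a `shap_values` array of shape `(H, W, C)`; the function name and the aggregation choice (absolute sum over channels, then min-max normalization) are illustrative, not part of the SHAP API:

```python
import numpy as np

def shap_to_saliency(shap_values: np.ndarray) -> np.ndarray:
    """Collapse per-pixel, per-channel Shapley values (H, W, C)
    into a single normalized saliency heatmap (H, W).

    Hypothetical helper for illustration -- not part of SHAP itself.
    """
    # Aggregate channel attributions by magnitude.
    heatmap = np.abs(shap_values).sum(axis=-1)
    # Min-max normalize to [0, 1] for display (guard against a flat map).
    rng = heatmap.max() - heatmap.min()
    if rng > 0:
        heatmap = (heatmap - heatmap.min()) / rng
    return heatmap

# Toy example: random attributions for a 4x4 RGB image.
saliency = shap_to_saliency(np.random.default_rng(0).normal(size=(4, 4, 3)))
print(saliency.shape)  # (4, 4)
```

The resulting array can then be overlaid on the input image with any plotting library; other aggregations (signed sum, per-channel max) are equally defensible depending on what you want the map to emphasize.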
-
Thanks for sharing the code.
How can I get the background images and saliency maps?
Could you also share the two processed data files, qbposter_train_instruct.json and qbposter_val_instruct.json?
-
It would be great if Model API models had a method, similar to `postprocess`, that generated a saliency map for models in any domain. Currently, we need to keep that logic on the GETi side. It…
-
Dear DeepFRI developers,
There seems to be an error in the saliency maps when the latest release and the downloadable models are used. For example in the case of 2PE5 in the Nat. Comms paper for the…
-
Hi!
This is very nice work, and I have a question: could you please tell me how you drew the saliency maps in the paper? If possible, please provide some details. Thanks!
-
Hi,
Why do we need saliency maps to train the model?
Where can we find the saliency maps for the ImageNet dataset?
Thanks
-
I could not find a way to adapt CNNSaliencyMap (Deep Inside Convolutional Networks) to produce outputs for PoseNet. The main problem is the multiple outputs, and there is not much information availabl…
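For a multi-output network, the usual gradient-saliency recipe still applies if you pick one scalar output head and backpropagate from it alone. A toy sketch of that idea, using a linear stand-in model so the gradient is analytic and the example stays self-contained (the shapes, the 7-output split, and the function name are all illustrative assumptions, not PoseNet's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a multi-output network: a linear map
# y = W @ x, where each row of W plays the role of one output head
# (e.g. 3 translation + 4 quaternion components in a pose regressor).
# For a linear model, the gradient of output k w.r.t. the input is
# exactly W[k], so the saliency map for head k is |W[k]| on the grid.
H, W_img, n_outputs = 8, 8, 7
weights = rng.normal(size=(n_outputs, H * W_img))

def saliency_for_output(weights: np.ndarray, k: int) -> np.ndarray:
    """Saliency of output head k of the toy linear model, as (H, W)."""
    grad = weights[k]                    # d y_k / d x for the linear model
    return np.abs(grad).reshape(H, W_img)

# Saliency w.r.t. the first output head only.
sal = saliency_for_output(weights, k=0)
```

With a real network in an autodiff framework, the same idea amounts to selecting one output component as the scalar to differentiate (e.g. `model(x)[k]`), backpropagating from it, and taking the absolute value of the input gradient; each head then gets its own map, which sidesteps the multiple-outputs problem.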
-
Hi,
Is it possible to upload the predicted saliency maps into Google Drive?
I can't download them.