-
Thanks for sharing the code.
How can we obtain the background images and saliency maps?
Could you also share the two processed data files, qbposter_train_instruct.json and qbposter_val_instruct.json?
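For reference, one generic way to compute a static saliency map is OpenCV's contrib saliency module, sketched below; whether this matches the preprocessing you used for the qbposter data is only a guess on my part.

```python
import cv2

def compute_saliency_map(image_path: str):
    """Compute a static spectral-residual saliency map for one image (OpenCV contrib)."""
    image = cv2.imread(image_path)
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = saliency.computeSaliency(image)  # float map in [0, 1]
    if not ok:
        raise RuntimeError(f"saliency computation failed for {image_path}")
    return (saliency_map * 255).astype("uint8")
```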
-
Congratulations on your work; I am very interested in your paper. I am a novice and would like to follow the tutorial to reproduce your results. When working on task-specific student model disti…
-
### Problem Description
I've realized that SHAP doesn't have an implementation for saliency maps. Even though image_plots lets us compute the Shapley values for an image, it doesn't provide an attent…
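What I have in mind is roughly the aggregation below: collapsing the per-pixel Shapley values into a single 2-D map. This is only a rough sketch; the function and variable names are mine, not part of the SHAP API.

```python
import numpy as np

def shap_to_saliency(shap_values: np.ndarray) -> np.ndarray:
    """Collapse per-pixel Shapley values of shape (H, W, C) into a 2-D saliency map."""
    saliency = np.abs(shap_values).sum(axis=-1)       # aggregate magnitudes over channels
    smin, smax = saliency.min(), saliency.max()
    return (saliency - smin) / (smax - smin + 1e-8)   # normalise to [0, 1] for display
```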
-
I have 606 protein sequences from UniProt. I tried running `TSignal` as follows and ran into the following issue (@Dumitrescu-Alexandru):
```
python main.py --test_seqs uniprot_all_true_forSignalPep…
```
-
I am new to Python and trying to produce saliency maps for some other images,
but I am not able to understand the command given for predicting saliency maps:
`THEANO_FLAGS=mode=FAST_RUN,device=…`
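For what it's worth, the prefix before the command is just an environment variable that Theano reads at import time. Below is a minimal sketch of setting equivalent flags from Python; the flag values are common defaults, not necessarily the ones this repo expects.

```python
import os

# THEANO_FLAGS is read by Theano when it is first imported, so setting it here
# (before `import theano`) is equivalent to prefixing the shell command with it.
os.environ["THEANO_FLAGS"] = ",".join([
    "mode=FAST_RUN",   # use Theano's optimized compilation mode
    "device=cpu",      # or a GPU device such as "cuda0" if available (assumption)
    "floatX=float32",  # typical dtype when running models on GPU
])

import theano  # noqa: E402  -- must come after the flags are set
# ...then run the prediction script as described in the README.
```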
-
Hi, thanks for the great work!
When I set `results = saliency.get_image_saliency_results(model, img, input_tensor, prediction_tensor, methods=['gradcam', 'smooth_grad'])` in [`cnn_pred_explain.ipyn…
-
To retrain these models with a custom set of images, it seems two images are required: the original image, and a masked image that identifies the salient features.
How…
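For illustration, here is a minimal sketch of how such image/mask pairs might be loaded for retraining, assuming matching file names in two directories; the layout and names are hypothetical, not the repo's actual convention.

```python
import os
from PIL import Image
from torch.utils.data import Dataset

class SaliencyPairDataset(Dataset):
    """Loads (original image, saliency mask) pairs that share a file name."""

    def __init__(self, image_dir, mask_dir, transform=None):
        self.image_dir = image_dir        # hypothetical folder of original images
        self.mask_dir = mask_dir          # hypothetical folder of masked images
        self.names = sorted(os.listdir(image_dir))
        self.transform = transform        # joint transform applied to both images

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        image = Image.open(os.path.join(self.image_dir, name)).convert("RGB")
        mask = Image.open(os.path.join(self.mask_dir, name)).convert("L")
        if self.transform is not None:
            image, mask = self.transform(image, mask)
        return image, mask
```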
-
# Context
Some context for this feature request:
I'm a university student working on a research project related to multi-modal saliency models. In this case, we need to analyze data and compare …
-
Your work has given me great inspiration. Thank you very much for that.
In the results phase, how do you superimpose the generated saliency map on the original image, as shown in your README.md? Do y…
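For comparison, this is one common way to do such an overlay with OpenCV; it is only a generic sketch, not necessarily how you produced the README figure.

```python
import cv2
import numpy as np

def overlay_saliency(image_bgr: np.ndarray, saliency: np.ndarray, alpha: float = 0.5):
    """Blend a saliency map over the original image.

    image_bgr: uint8 array of shape (H, W, 3)
    saliency:  float array of shape (H, W), any value range
    """
    # normalise the saliency map to 0-255 and resize it to the image size
    sal = cv2.normalize(saliency, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    sal = cv2.resize(sal, (image_bgr.shape[1], image_bgr.shape[0]))
    # colour-code the map and alpha-blend it onto the image
    heatmap = cv2.applyColorMap(sal, cv2.COLORMAP_JET)
    return cv2.addWeighted(heatmap, alpha, image_bgr, 1 - alpha, 0)
```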
-
Hi,
Currently the saliency method does not work for images with a single channel (e.g. (1, 244, 244)). At some point the code seems to require three channels, which results in shape errors, etc.
Cou…
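As a temporary workaround, would tiling the grayscale channel to three channels before calling the saliency method be acceptable? A sketch, assuming a PyTorch tensor input:

```python
import torch

def to_three_channels(img: torch.Tensor) -> torch.Tensor:
    """Repeat a single-channel image so it matches the (3, H, W) layout the code expects."""
    if img.dim() == 3 and img.size(0) == 1:      # (1, H, W) grayscale image
        return img.repeat(3, 1, 1)
    if img.dim() == 4 and img.size(1) == 1:      # (N, 1, H, W) batched grayscale
        return img.repeat(1, 3, 1, 1)
    return img                                    # already multi-channel
```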