Closed: enver1323 closed this 7 months ago
Thank you so much, @ghkim9213. I will try to fix the heatmaps to be the same. I also get similar results. I looked into Inception v3; the issue is related to the indexing used in the model's preprocessing step, e.g. a[:, 0]
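For illustration only (this is not the package's actual rule), one way to give a slice such as a[:, 0] a relevance-propagation rule is to scatter the output relevance back into the selected positions, leaving the unselected positions at zero so that total relevance is conserved:

```python
import torch

def slice_relevance_backward(input_shape, col, relevance):
    # Route the relevance assigned to a[:, col] back to the full tensor:
    # unselected positions receive zero, so total relevance is conserved.
    r_full = torch.zeros(input_shape)
    r_full[:, col] = relevance
    return r_full

a = torch.randn(4, 3)
r_out = torch.ones(4)                               # relevance of a[:, 0]
r_in = slice_relevance_backward(a.shape, 0, r_out)
assert torch.isclose(r_in.sum(), r_out.sum())       # conservation holds
```

Any index or gather op in the forward pass would need a corresponding rule like this, otherwise relevance is silently dropped at that step.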
I tested the implementation from the original RAP package, and it produces similarly random results on resnet18.
I am not sure about the reason behind this behavior. The authors test the correctness of explanations by comparing the sums of prediction logits and relevances. I ran the same test on this implementation version. The results are as follows:
Pred logit: 31.355219, Relevance Sum: 31.355202
Pred logit: 14.111381, Relevance Sum: 14.111369
Pred logit: 16.852928, Relevance Sum: 16.85293
Pred logit: 20.30917, Relevance Sum: 20.309172
Pred logit: 12.61315, Relevance Sum: 12.613142
One more issue I found during the test is that the Pred logit and Relevance Sum values do not match when the input tensor is batched. However, as the results above show, the values do match when samples are processed one by one.
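The one-by-one conservation check described above can be sketched as follows; rap_attribute is a hypothetical placeholder for the package's attribution call, assumed to return a relevance map of the same shape as the input:

```python
import torch

def check_conservation(model, rap_attribute, inputs, atol=1e-3):
    # Compare the predicted-class logit with the sum of relevances,
    # one sample at a time (batched inputs showed a mismatch).
    for x in inputs:
        x = x.unsqueeze(0)                        # batch of one
        with torch.no_grad():
            pred = model(x).max(dim=1).values.item()
        r_sum = rap_attribute(model, x).sum().item()
        assert abs(pred - r_sum) < atol, (pred, r_sum)
```

Running the same loop on a batched tensor instead of single samples is where the values diverge in the current implementation.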
I will investigate further; I just wanted to share these findings.
The last commit fixed the issue with multiple items in a batch.
I see the same issues with the original package in my environment. Just for your information.
(Attached heatmaps: tench-vgg, tench-resnet18, tench-resnet50.)
I suspect the author failed to propagate relevance through the residual connection in the torch model. In the paper, they say they use a Keras model instead of the torch one.
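One common way to handle relevance at a skip connection (shown here as a generic proportional rule, not necessarily the exact fix the original package needs) is to split the incoming relevance between the two addends of out = branch + identity in proportion to their contributions:

```python
import torch

def split_residual_relevance(branch_out, identity, relevance, eps=1e-9):
    # Split relevance at out = branch_out + identity proportionally to
    # each addend's contribution; the two parts sum back to the input
    # relevance, so conservation holds per position. (Illustrative
    # sketch; stabilizers and sign handling differ between LRP/RAP rules.)
    total = branch_out + identity
    denom = total + eps * torch.sign(total)   # avoid division by zero
    r_branch = relevance * branch_out / denom
    r_identity = relevance * identity / denom
    return r_branch, r_identity
```

If the backward pass simply ignores the identity branch, the relevance sum can no longer match the logit, which would explain the random-looking heatmaps on the resnet models.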
https://github.com/wjNam/Relative_Attributing_Propagation/issues/6#issue-1932265018
If you do not mind, I will merge the existing work into main, so that we have a functioning RAP for now. We can update it once all issues are resolved.
For some reason, applying ReLU before the add in the residual block improves explanation quality substantially.
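For context, torchvision's BasicBlock applies ReLU after the addition (out = relu(branch + identity)); the variant described here, with ReLU applied before the addition, can be sketched like this (illustrative only; layer choices and names are assumptions, not the PR's actual code):

```python
import torch
import torch.nn as nn

class PreAddReLUBlock(nn.Module):
    # Residual block where ReLU is applied to the branch output before
    # the skip addition, instead of after it. This keeps the post-add
    # activations free of the final nonlinearity.
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))  # ReLU before the add
        return out + x                              # no ReLU after the add
```

Moving the nonlinearity before the addition changes which activations the relevance rules see at the skip connection, which may be why the heatmaps improve.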
This PR introduces the updated RAP implementation. To make RAP fully adaptive, support for attention mechanisms and indexing is still needed. Other than that, the method should work without issues.