Closed YuhengHuang42 closed 3 years ago
Hi, @hyhzxhy, thanks for your implementation. It should indeed be possible to do this in mini-batches.
Feel free to open a pull request adding it as a new file (for example, scorecam_batch.py); I will check it later.
I will close this issue, as I can see there is another implementation that is potentially faster than this one.
Dear author:
First of all, thanks for your great work!
When I checked the code of scorecam.py, I noticed that it computes the score_saliency_map one instance at a time. This is inefficient when you want to compute Score-CAM for many instances.
I also read your paper, and it seems that your algorithm can in fact compute Score-CAM for a mini-batch (correct me if I am wrong). This would be more efficient than processing one instance at a time.
However, to support mini-batch computation, some of the code needs to be modified:
1. In ScoreCAM, you do
score.backward(retain_graph=retain_graph)
But according to my understanding, Score-CAM is gradient-free, so this backward pass is actually unnecessary. We need to remove it before we can compute on mini-batch input.
2. In ScoreCAM you skip a channel whenever
saliency_map.max() == saliency_map.min()
This skip logic needs to be implemented per sample for the mini-batch computation as well.

I will leave my code here. I have tested it on a few instances and did not find any problems. This implementation takes around 41 seconds for 32 instances on my server, whereas computing a single instance takes around 16 seconds, so there is a real improvement in efficiency.
As I am not sure whether the code is correct, I will leave it below. You can check it when you are free.
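To make the two modifications above concrete, here is a minimal framework-agnostic sketch of a batched Score-CAM loop in NumPy. It is my own simplified illustration, not the implementation from this repository: the function name `batched_scorecam` is hypothetical, the activation maps are assumed to be already upsampled to the input resolution, and the per-class score is used directly as the channel weight (some implementations apply a softmax over the weights instead). Note that there is no backward call anywhere, and the max == min skip is applied per sample via a boolean mask.

```python
import numpy as np

def batched_scorecam(model, inputs, activations):
    """Gradient-free Score-CAM for a mini-batch (illustrative sketch).

    model:       callable mapping (B, C, H, W) -> (B, num_classes) scores
    inputs:      (B, C, H, W) input batch
    activations: (B, K, H, W) activation maps, assumed already upsampled
                 to the input resolution (a simplifying assumption here)
    Returns (B, H, W) saliency maps.
    """
    B, K, H, W = activations.shape
    # Per-sample target class from the clean forward pass (no backward needed).
    target = model(inputs).argmax(axis=1)
    cams = np.zeros((B, H, W))

    for k in range(K):
        act = activations[:, k]                         # (B, H, W)
        amin = act.min(axis=(1, 2), keepdims=True)
        amax = act.max(axis=(1, 2), keepdims=True)
        # Per-sample version of the max == min skip: flat maps are masked out.
        valid = (amax > amin).reshape(B)
        if not valid.any():
            continue
        # Normalize each sample's map to [0, 1]; flat maps become all zeros.
        norm = np.where(valid[:, None, None],
                        (act - amin) / np.where(amax > amin, amax - amin, 1.0),
                        0.0)
        # Mask the whole batch at once and score it with a single forward pass.
        masked = inputs * norm[:, None]                 # broadcast over channels
        scores = model(masked)                          # (B, num_classes)
        w = scores[np.arange(B), target]                # each sample's own class
        cams += (w * valid)[:, None, None] * norm       # weighted accumulation

    return np.maximum(cams, 0)                          # final ReLU as in Score-CAM
```

The key point is that every channel k costs one forward pass over the whole batch instead of one forward pass per instance, which is where the speedup over the per-instance loop comes from.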