Implementation of Sensitivity-N
Sensitivity-N randomly samples a subset of $N$ input features, then computes the Pearson correlation coefficient (PCC) between the sum of the attributions of the sampled features and the resulting variation of the target class output.
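In symbols (the notation here is an assumption, following the usual Sensitivity-N formulation): given a target-class output $f_c$, attributions $R_i$, and a random subset $S$ with $|S| = N$, the metric is

$$
\mathrm{PCC}\left(\sum_{i \in S} R_i,\; f_c(x) - f_c(x_{\setminus S})\right),
$$

where $x_{\setminus S}$ denotes the input $x$ with the features in $S$ masked out, and the correlation is computed across the randomly sampled subsets. The implementation consists of the following parts.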
SensitivityNResult
The class is like this:
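A minimal sketch of what such a class could look like, assuming it accumulates the per-image results described below and serializes them with `mmcv.dump`; the method names `add_single_result` and `dump` are illustrative, not the original API:

```python
from typing import Dict, List

import mmcv
import numpy as np


class SensitivityNResult:
    """Collects per-image Sensitivity-N correlations and dumps them as JSON."""

    def __init__(self, summarized: bool = False) -> None:
        self.summarized = summarized
        self.results: List[Dict] = []

    def add_single_result(self, single_result: Dict) -> None:
        # single_result holds "n" (int) and "correlation" (float); see below.
        self.results.append(single_result)

    def dump(self, path: str) -> None:
        if self.summarized:
            # Summarize the correlations with their mean and std.
            correlations = np.array([r["correlation"] for r in self.results])
            data = {
                "n": self.results[0]["n"],
                "mean": float(correlations.mean()),
                "std": float(correlations.std()),
            }
        else:
            data = self.results
        mmcv.dump(data, path)  # the JSON format is inferred from the suffix
```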
The `single_result` is a dictionary containing the following field(s):

- `"n"`: (int) The number of features sampled.
- `"correlation"`: (float) The Pearson correlation coefficient. `numpy.corrcoef` can be used to compute the correlation here.

`num_masks` controls how many random masks are independently sampled for perturbing each image.

If `summarized` is `False`, then the dumped JSON file is like the first example below. Otherwise, compute the mean and std of the correlations, and the dumped JSON file is like the second example.
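Both layouts are sketched here; the exact keys and values are assumptions based on the `single_result` fields above. With `summarized` set to `False`:

```json
[
    {"n": 10, "correlation": 0.42},
    {"n": 10, "correlation": 0.37}
]
```

With `summarized` set to `True`, a single summary record is dumped instead:

```json
{"n": 10, "mean": 0.40, "std": 0.03}
```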
`mmcv.dump` can be used to dump the dict to a JSON file.
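For example, a hypothetical end-to-end use of the class sketched above:

```python
result = SensitivityNResult(summarized=True)
result.add_single_result({"n": 10, "correlation": 0.42})
result.add_single_result({"n": 10, "correlation": 0.37})
result.dump("sensitivity_n.json")  # mmcv.dump infers JSON from the suffix
```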
SensitivityNPerturbation
The `perturb` method returns a tuple consisting of:

- `batched_samples`: a `torch.Tensor` with shape `(num_masks + 1, num_channels, height, width)`, where the last sample is the unperturbed image.
- `sum_attributions`: an `np.ndarray` with shape `(num_masks,)`. Each element is the sum of attributions of each random masked saliency map. Note that there is no `+1` in the shape here.
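A minimal sketch of such a class, assuming the sampled features are masked to zero and the saliency map matches the spatial size of the image; the constructor signature is an assumption:

```python
import numpy as np
import torch


class SensitivityNPerturbation:
    """Draws random feature subsets of size n and applies them to an image."""

    def __init__(self, n: int, num_masks: int) -> None:
        self.n = n
        self.num_masks = num_masks

    def perturb(self, img: torch.Tensor, saliency_map: torch.Tensor):
        # img: (num_channels, height, width); saliency_map: (height, width).
        num_channels, height, width = img.shape
        num_features = height * width

        batched_samples = img.new_empty(
            (self.num_masks + 1, num_channels, height, width))
        sum_attributions = np.empty(self.num_masks)

        flat_saliency = saliency_map.reshape(-1)
        for i in range(self.num_masks):
            # Sample n feature indices without replacement.
            subset = torch.randperm(num_features)[:self.n]
            mask = torch.ones(num_features, dtype=torch.bool)
            mask[subset] = False
            # Zero out the sampled features (baseline value assumed to be 0).
            batched_samples[i] = img * mask.reshape(height, width)
            # Sum of the attributions of the sampled features.
            sum_attributions[i] = flat_saliency[subset].sum().item()

        batched_samples[-1] = img  # the last sample is the unperturbed image
        return batched_samples, sum_attributions
```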
SensitivityN
The class should be like this:
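Again only a sketch, assuming the class wires the classifier and the `SensitivityNPerturbation` sketch above together and computes the correlation with `numpy.corrcoef`, as the text suggests; the `evaluate` name and constructor signature are assumptions:

```python
import numpy as np
import torch


class SensitivityN:
    """Computes the Sensitivity-N correlation for a single image."""

    def __init__(self, classifier: torch.nn.Module, n: int,
                 num_masks: int) -> None:
        self.classifier = classifier
        self.perturbation = SensitivityNPerturbation(n, num_masks)
        self.n = n

    @torch.no_grad()
    def evaluate(self, img: torch.Tensor, saliency_map: torch.Tensor,
                 target: int) -> dict:
        batched_samples, sum_attributions = self.perturbation.perturb(
            img, saliency_map)
        scores = self.classifier(batched_samples)[:, target]
        # Variation of the target-class output: unperturbed minus perturbed.
        score_diffs = (scores[-1] - scores[:-1]).cpu().numpy()
        # PCC between the summed attributions and the output variations.
        correlation = np.corrcoef(sum_attributions, score_diffs)[0, 1]
        return {"n": self.n, "correlation": float(correlation)}
```

The returned dictionary matches the `single_result` format described above, so it can be fed directly to `SensitivityNResult.add_single_result`.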