sandylaker / saliency-metrics

An Open-source Framework for Benchmarking Explanation Methods in Computer Vision
https://saliency-metrics.readthedocs.io
MIT License

[Feature Request] Implementation of Sensitivity-N #17

Closed sandylaker closed 2 years ago

sandylaker commented 2 years ago

Implementation of Sensitivity-N

Sensitivity-N randomly samples a subset of $N$ input features, then computes the Pearson correlation coefficient (PCC) between the sum of the attributions of the sampled features and the variation of the output of the target class. The implementation consists of the following parts.
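For a fixed $n$, the statistic reduces to a single Pearson correlation over the num_masks random subsets. A minimal illustrative sketch of that computation (the arrays are made-up numbers, not part of the proposed API):

import numpy as np

# For one value of n: per-mask sums of the attributions of the removed features,
# and the corresponding changes in the target class score (made-up numbers).
attribution_sums = np.array([0.8, 0.5, 0.3, 0.9])
output_variations = np.array([0.6, 0.4, 0.2, 0.7])

# np.corrcoef returns a 2x2 correlation matrix; the off-diagonal entry is the PCC.
pcc = np.corrcoef(attribution_sums, output_variations)[0, 1]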

SensitivityNResult

The class looks like this:

class SensitivityNResult(SerializableResult):

    def __init__(self, summarized: bool = True, num_masks: int = 100, ...) -> None:
        self.summarized = summarized
        self.num_masks = num_masks
        ...

    def dump(self, file_path: str) -> None:
        ...

    def add_single_result(self, single_result: Dict) -> None:
        ...
If summarized is False, the raw correlations are dumped, and the JSON file is like this:

[
    {
        "n": 10,
        "correlation": [0.1, 0.13, 0.15]
    },
    {
        "n": 50,
        "correlation": [0.12, 0.14, 0.19]
    }
]

Otherwise, the mean and standard deviation of the correlations are computed, and the dumped JSON file is like this:

[
    {"n": 10, "mean_correlation": 0.13, "std_correlation": 0.25},
    {"n": 50, "mean_correlation": 0.16, "std_correlation": 0.45}
]

mmcv.dump can be used to dump the results to a JSON file.
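For illustration, dump could aggregate and serialize the stored results like this (a minimal sketch; the attribute self._single_results and its layout are assumptions, not part of the proposal):

import mmcv
import numpy as np

def dump(self, file_path: str) -> None:
    # Sketch only: self._single_results is assumed to be a list of dicts
    # of the form {"n": int, "correlation": List[float]}.
    if self.summarized:
        results = [
            {
                "n": r["n"],
                "mean_correlation": float(np.mean(r["correlation"])),
                "std_correlation": float(np.std(r["correlation"])),
            }
            for r in self._single_results
        ]
    else:
        results = self._single_results
    # mmcv infers the JSON format from the file suffix.
    mmcv.dump(results, file_path)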

SensitivityNPerturbation

from typing import List, Optional, Tuple, Union

import numpy as np
import torch


class SensitivityNPerturbation:

    def __init__(self, n: int, num_masks: int = 100) -> None:
        self._n = n
        self.num_masks = num_masks
        self._masks: Optional[List[torch.Tensor]] = None

    def _generate_random_masks(self, spatial_size: Tuple[int, int], device: Optional[Union[str, torch.device]] = None) -> List[torch.Tensor]:
        masks: List[torch.Tensor] = []
        h, w = spatial_size
        for _ in range(self.num_masks):
            # Sample n distinct flat indices and convert them to 2D coordinates.
            inds = np.unravel_index(np.random.choice(h * w, self._n, replace=False), (h, w))
            mask = np.zeros((h, w))
            mask[inds] = 1
            masks.append(torch.tensor(mask, dtype=torch.float32, device=device))
        return masks

    def perturb(self, img: torch.Tensor, smap: torch.Tensor) -> Tuple[torch.Tensor, np.ndarray]:
        if self._masks is None:
            # Lazily generate the masks to match the spatial size and device of the input.
            spatial_size = img.shape[-2:]
            self._masks = self._generate_random_masks(spatial_size, device=img.device)
        ...
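A hypothetical usage of the class, following the signature above (the dummy tensors and the interpretation of the return value are assumptions):

import torch

perturbation = SensitivityNPerturbation(n=50, num_masks=100)

img = torch.rand(3, 224, 224)   # dummy CHW image
smap = torch.rand(224, 224)     # dummy saliency map

# Once the elided body is filled in, perturb is expected to return the batch of
# perturbed images together with the per-mask attribution sums.
perturbed_batch_and_sums = perturbation.perturb(img, smap)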

SensitivityN

The class should be like this:

class SensitivityN(ReInferenceMetric):
    def __init__(self, classifier_cfg: Dict, log_n_max: int, log_n_ticks: float, summarized: bool = True, num_masks: int = 100, ...) -> None:
        self.classifier = build_classifier(classifier_cfg)
        # freeze the model and turn eval mode on
        ...

        self._result: SerializableResult = SensitivityNResult(summarized=summarized)
        self._num_masks = num_masks
        n_list = np.logspace(0, log_n_max, int(log_n_max / log_n_ticks), base=10.0, dtype=int)
        # to eliminate the duplicated elements caused by rounding
        self._n_list = np.unique(n_list)

        self._current_n_ind = 0
        self._perturbation = SensitivityNPerturbation(self._n_list[self._current_n_ind], num_masks=self._num_masks)

    @property
    def num_ns(self):
        return len(self._n_list)

    @property
    def current_n(self):
        return self._n_list[self._current_n_ind]

    def increment_n(self) -> None:
        self._current_n_ind += 1
        self._perturbation = SensitivityNPerturbation(self._n_list[self._current_n_ind], self._num_masks)

    def evaluate(self, img: torch.Tensor, smap: torch.Tensor, target: int) -> Dict:
        ...
        # return a single result

    def update(self, single_result: Dict) -> None:
        # update self._result
        ...
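Putting the pieces together, a hypothetical driver loop might look like this (classifier_cfg and the data source are placeholders, not part of the proposal):

sensitivity_n = SensitivityN(classifier_cfg, log_n_max=4, log_n_ticks=0.2)

for i in range(sensitivity_n.num_ns):
    for img, smap, target in dataset:  # placeholder data source
        single_result = sensitivity_n.evaluate(img, smap, target)
        sensitivity_n.update(single_result)
    # Advance to the next n, except after the last one.
    if i < sensitivity_n.num_ns - 1:
        sensitivity_n.increment_n()

# Serialize the accumulated results; accessing the private attribute here is
# only for illustration.
sensitivity_n._result.dump("sensitivity_n.json")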
sandylaker commented 2 years ago

Duplicate of #16