Boundary-based methods
Instead of computing the influence of a point with influence functions, we propose to measure the distance of each data point to the decision boundary. This can be done in two different ways: by using DeepFool to find the closest adversarial example, or by finding the minimum-norm perturbation of the model's weights that changes the sample's prediction.
These methods perform worse than the others in the library, but we have found them to be more robust to hyperparameter changes.
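To illustrate the idea behind these methods (this is a standalone sketch, not the library's API; all names below are made up for the example), note that for a linear classifier the distance to the decision boundary has a closed form. This closed form is exactly the quantity that DeepFool approximates iteratively for non-linear models:

```python
import numpy as np

def distance_to_boundary(x, w, b):
    """Distance from point x to the hyperplane w.x + b = 0 of a
    binary linear classifier (the quantity DeepFool approximates
    step by step for non-linear models)."""
    return abs(np.dot(w, x) + b) / np.linalg.norm(w)

# Toy binary classifier: w.x + b = 0 is the decision boundary.
w = np.array([1.0, -1.0])
b = 0.0

points = np.array([[0.1, 0.0],   # very close to the boundary
                   [2.0, -2.0],  # far from the boundary
                   [0.5, 0.2]])

# Under this proxy, points closest to the boundary are scored as
# the most influential / least confidently classified.
scores = [distance_to_boundary(x, w, b) for x in points]
```

Ranking training points by `scores` then gives a cheap, hyperparameter-light influence proxy, which is the trade-off described above.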
LiSSA
In Understanding Black-box Predictions via Influence Functions, the authors propose an iterative procedure for computing inverse-Hessian-vector products (IHVPs): LiSSA. We have added a TF-optimized implementation of this algorithm alongside our other two IHVP calculators; it can be used when they don't perform well enough.
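The recursion at the heart of LiSSA is simple enough to sketch in a few lines. The following is an illustrative numpy version on an explicit toy Hessian (the library works with Hessian-vector products of a real model; the function name and `scale`/`iters` defaults here are assumptions for the example):

```python
import numpy as np

def lissa_ihvp(hvp, b, scale=10.0, iters=300):
    """Approximate H^{-1} b with the LiSSA recursion
        v_j = b + (I - H/scale) v_{j-1},
    which converges to (H/scale)^{-1} b when the spectral norm of
    H/scale is below 1; dividing by `scale` recovers H^{-1} b.
    `hvp` computes Hessian-vector products H v."""
    v = b.copy()
    for _ in range(iters):
        v = b + v - hvp(v) / scale
    return v / scale

# Toy positive-definite "Hessian" so we can check against a direct solve.
H = np.array([[2.0, 0.3],
              [0.3, 1.5]])
b = np.array([1.0, 0.0])

approx = lissa_ihvp(lambda v: H @ v, b)
exact = np.linalg.solve(H, b)
```

Only Hessian-vector products are required, never the full Hessian, which is what makes the scheme tractable for deep networks.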
Arnoldi
We implemented the efficient approximation of influence scores proposed in Scaling Up Influence Functions, based on the Arnoldi iteration.
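The key ingredient of that approach is an Arnoldi iteration, which builds a small Krylov subspace in which the Hessian's dominant eigenvalues (and hence the IHVP) can be approximated cheaply. A minimal numpy sketch of the iteration itself, not the library's implementation:

```python
import numpy as np

def arnoldi(matvec, v0, k):
    """Run k steps of the Arnoldi iteration.
    Returns an orthonormal basis Q of the Krylov subspace and the
    small Hessenberg matrix h with h ~ Q^T A Q; the eigenvalues of
    h (Ritz values) approximate A's dominant eigenvalues."""
    n = v0.shape[0]
    Q = np.zeros((n, k + 1))
    h = np.zeros((k + 1, k))
    Q[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = matvec(Q[:, j])
        for i in range(j + 1):          # Gram-Schmidt against the basis
            h[i, j] = Q[:, i] @ w
            w = w - h[i, j] * Q[:, i]
        h[j + 1, j] = np.linalg.norm(w)
        if h[j + 1, j] < 1e-12:         # invariant subspace found
            return Q[:, :j + 1], h[:j + 1, :j + 1]
        Q[:, j + 1] = w / h[j + 1, j]
    return Q[:, :k], h[:k, :k]

# Toy symmetric "Hessian": the Ritz values of the small matrix h
# should recover its dominant eigenvalue.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8))
A = A @ A.T                             # symmetric positive semi-definite
Q, h = arnoldi(lambda v: A @ v, rng.normal(size=8), k=8)
ritz = np.linalg.eigvalsh((h + h.T) / 2)
```

In practice only a few dozen iterations are run against the model's Hessian-vector product, and influence scores are then computed in the resulting low-dimensional subspace.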
All these methods were also included in the benchmarking module for easy testing.