dais-ita / interpretability-papers

Papers on interpretable deep learning, for review

Understanding black-box predictions via influence functions #36

Open richardtomsett opened 6 years ago

richardtomsett commented 6 years ago

Understanding black-box predictions via influence functions

How can we explain the predictions of a black-box model? In this paper, we use influence functions -- a classic technique from robust statistics -- to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying training points most responsible for a given prediction. To scale up influence functions to modern machine learning settings, we develop a simple, efficient implementation that requires only oracle access to gradients and Hessian-vector products. We show that even on non-convex and non-differentiable models where the theory breaks down, approximations to influence functions can still provide valuable information. On linear models and convolutional neural networks, we demonstrate that influence functions are useful for multiple purposes: understanding model behavior, debugging models, detecting dataset errors, and even creating visually-indistinguishable training-set attacks.

Bibtex:

@misc{1703.04730,
  Author = {Pang Wei Koh and Percy Liang},
  Title = {Understanding Black-box Predictions via Influence Functions},
  Year = {2017},
  Eprint = {arXiv:1703.04730},
}
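For reference, the central quantity in the paper is the influence of upweighting a training point z on the loss at a test point z_test, I_up,loss(z, z_test) = −∇L(z_test, θ̂)ᵀ H⁻¹ ∇L(z, θ̂), where H is the Hessian of the average training loss at the learned parameters θ̂. Below is a minimal sketch of that computation for a small, damped logistic regression, where the Hessian can be formed and inverted directly; the function and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss(theta, x, y):
    """Gradient of the logistic loss at a single example (labels y in {0, 1})."""
    return (sigmoid(x @ theta) - y) * x

def hessian_train_loss(theta, X, Y, damping=1e-2):
    """Hessian of the average training loss, plus a small damping term to keep
    it positive definite (the paper adds a similar term for non-convex losses)."""
    p = sigmoid(X @ theta)
    w = p * (1.0 - p)                          # per-example curvature weights
    H = (X * w[:, None]).T @ X / len(Y)        # (1/n) * sum_i w_i x_i x_i^T
    return H + damping * np.eye(len(theta))

def influence_up_loss(theta, X, Y, x_test, y_test, damping=1e-2):
    """I_up,loss(z_i, z_test) = -grad L(z_test)^T H^{-1} grad L(z_i) for every
    training point z_i = (X[i], Y[i])."""
    H = hessian_train_loss(theta, X, Y, damping)
    s_test = np.linalg.solve(H, grad_loss(theta, x_test, y_test))  # H^{-1} grad L(z_test)
    return np.array([-s_test @ grad_loss(theta, X[i], Y[i]) for i in range(len(Y))])
```

The vector s_test = H⁻¹ ∇L(z_test) only needs to be computed once per test point and is then reused across all training points; sorting the resulting scores ranks the training points by their estimated effect on that test prediction's loss.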

richardtomsett commented 6 years ago

From previous review: Koh and Liang (2017) propose a method for investigating a model from the point of view of its training data. They do this by asking how a model’s predictions would differ if a particular training point were altered, or not seen during training at all. Using a scalable approximation derived from statistical influence functions, they estimate the effect of changing every training point without having to retrain the model. Their method provides a way of assessing the influence of particular training points on the classification of a test point, allowing the model-builder to find the training points that contribute most to classification errors. This reveals how outliers can dominate learned model parameters, and can flag potentially mis-labeled training data.

Additionally, they show it is possible to generate “adversarial” training images: images modified with noise that is imperceptible to a human but that degrades model performance when the images are used for training. Prior to this work, adversarial examples had only been considered as inputs engineered to cause already-trained models to misclassify them [potentially need refs to adversarial literature here]; Koh and Liang show that classifiers can also be attacked through specially engineered training data.
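On the scaling point: for models where the Hessian cannot be formed explicitly, the paper requires only oracle access to gradients and Hessian-vector products, solving H s_test = ∇L(z_test) with conjugate gradients or the stochastic LiSSA estimator. The sketch below is one illustrative route under those assumptions, not the authors' code: it approximates Hessian-vector products by finite differences of a user-supplied gradient function (grad_fn, eps, and damping are hypothetical parameters) and passes them to SciPy's conjugate-gradient solver; in practice, exact HVPs from automatic differentiation (Pearlmutter's trick) are preferable.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def make_hvp(grad_fn, theta, eps=1e-5):
    """Approximate the Hessian-vector product H v by central finite differences
    of the gradient: H v ~= (g(theta + eps*v) - g(theta - eps*v)) / (2*eps).
    grad_fn(theta) must return the gradient of the full training loss at theta."""
    def hvp(v):
        return (grad_fn(theta + eps * v) - grad_fn(theta - eps * v)) / (2.0 * eps)
    return hvp

def inverse_hvp_cg(grad_fn, theta, b, damping=1e-2):
    """Solve (H + damping*I) x = b with conjugate gradients, using only gradient
    evaluations (no explicit Hessian). The damping keeps the operator positive
    definite when the loss is non-convex."""
    hvp = make_hvp(grad_fn, theta)

    def matvec(v):
        v = np.ravel(v)
        return hvp(v) + damping * v

    op = LinearOperator((theta.size, theta.size), matvec=matvec, dtype=theta.dtype)
    x, info = cg(op, b)   # info == 0 signals convergence
    return x

# Usage sketch (hypothetical names): s_test = inverse_hvp_cg(grad_train_loss, theta_hat,
# grad_loss(theta_hat, x_test, y_test)); the influence of training point z_i is then
# -s_test @ grad_loss(theta_hat, X[i], Y[i]), as in the convex example above.
```

Conjugate gradients gives an accurate solve for moderately sized models; the paper's LiSSA variant instead stochastically estimates the inverse-HVP from mini-batches, trading some accuracy for speed on large networks.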