nareshshah139 / Interpretable_Papers

Interpretable AI papers

Local and global model interpretability via backward selection and clustering #8

Open nareshshah139 opened 5 years ago

nareshshah139 commented 5 years ago

Abstract: Local explanation frameworks aim to rationalize particular decisions made by a black-box prediction model. Existing techniques are often restricted to a specific type of predictor or based on input saliency, which may be undesirably sensitive to factors unrelated to the model's decision making process. We instead propose sufficient input subsets that identify minimal subsets of features whose observed values alone suffice for the same decision to be reached, even if all other input feature values are missing. General principles that globally govern a model's decision-making can also be revealed by searching for clusters of such input patterns across many data points. Our approach is conceptually straightforward, entirely model-agnostic, simply implemented using instance-wise backward selection, and able to produce more concise rationales than existing techniques. We demonstrate the utility of our interpretation method on neural network models trained on text and image data.
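The abstract describes a two-phase procedure: instance-wise backward selection ranks features by masking them one at a time, then the highest-ranked features are restored until the model's confidence recovers. Below is a minimal sketch of that idea, not the authors' exact SIS implementation; it assumes `f` maps a 1-D feature vector to a scalar confidence for the predicted class, that "missing" features can be encoded with a `mask_value` baseline (e.g., zero or a mean embedding), and that `threshold` is the confidence the reduced input must still reach for the same decision to count as reached.

```python
import numpy as np

def sufficient_input_subset(f, x, threshold, mask_value=0.0):
    """Sketch of finding one sufficient input subset (SIS) for input x.

    f          : callable, 1-D feature vector -> scalar confidence
                 for the predicted class (assumption of this sketch)
    x          : 1-D array of observed feature values
    threshold  : confidence the reduced input must still reach
    mask_value : stand-in for a "missing" feature (assumption; the
                 paper's choice of baseline may differ)
    """
    x = np.asarray(x, dtype=float)
    remaining = list(range(len(x)))
    removal_order = []

    # Phase 1 (backward selection): repeatedly mask the feature whose
    # removal hurts the prediction the least, recording the order.
    current = x.copy()
    while remaining:
        best_i, best_score = None, -np.inf
        for i in remaining:
            trial = current.copy()
            trial[i] = mask_value
            score = f(trial)
            if score > best_score:
                best_i, best_score = i, score
        current[best_i] = mask_value
        remaining.remove(best_i)
        removal_order.append(best_i)

    # Phase 2: restore features in reverse removal order (most important
    # first, since they survived longest) until the masked input alone
    # reaches the threshold again.
    sis = []
    masked = np.full_like(x, mask_value)
    for i in reversed(removal_order):
        sis.append(i)
        masked[i] = x[i]
        if f(masked) >= threshold:
            return sorted(sis)  # these observed values alone suffice
    return None  # no subset reaches the threshold
```

A hypothetical usage with a scikit-learn classifier `clf` and predicted class `c` would be `f = lambda v: clf.predict_proba(v[None])[0, c]` with, say, `threshold = 0.9 * f(x)`; note the backward pass costs O(d²) model evaluations for d features. For the global analysis the abstract mentions, one simple illustration (the paper's exact clustering procedure may differ) is to cluster binary SIS membership vectors across many inputs:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_sis(sis_list, n_features, n_clusters=5):
    """Cluster SIS from many inputs to surface recurring decision patterns.

    Each SIS is encoded as a binary membership vector; k-means here is
    an illustrative stand-in, not necessarily the paper's method.
    """
    masks = np.zeros((len(sis_list), n_features))
    for row, sis in enumerate(sis_list):
        masks[row, sis] = 1.0
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(masks)
```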

nareshshah139 commented 5 years ago

Authors: Brandon Carter, Jonas Mueller, Siddhartha Jain, David Gifford