From our initial lists I did not find any method that could replace our DeepLIFT SHAP with one that is not gradient-based (i.e., does not need back-propagation and/or the internals of the model). I then expanded my search a bit and came across two methods outside our lists, with pluses and minuses:
Method | Paper(s) | Implementation(s) | Advantages | Disadvantages |
---|---|---|---|---|
Occlusion (see the sketch after the table) | Visualizing and Understanding Convolutional Networks, 2014 | In Captum (PyTorch); in DeepExplain (TF, Keras with TF as backend) | Identified as one of the 3 most interpretable XAI methods by a very recent remote-sensing paper | Old; very similar to RISE, and RISE claims to be better; usually used for images; DeepExplain limitation: "Only Tensorflow V1 is supported. For V2, there is an open pull-request, that works if eager execution is disabled." |
Contextual Importance and Utility (CIU) | Method paper: Explainable AI without Interpretable Model, 2020; Python implementation paper: Py-CIU: A Python Library for Explaining Machine Learning Predictions Using Contextual Importance and Utility, 2020; R implementation papers: Contextual Importance and Utility in R: the 'ciu' Package (general) and ciu.image: An R Package for Explaining Image Classification with Contextual Importance and Utility (images); latest journal paper for image classification, 2021: Context-based image explanations for deep neural networks | Py-CIU (Python); ciu (R); ciu.image (R) | No need for an interpretable surrogate model to explain, hence different from LIME, LRP, etc.; the latest publications are quite new, but the author has been publishing on this since 1995, and the method is very simple | The Python library for now supports only tabular data; the image version exists only as an R package |
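For reference, a minimal sketch of the occlusion idea (not the Captum or DeepExplain API): slide a baseline-valued patch over the input and record how much the target-class score drops. The `predict` callable, the patch size, stride and baseline value are illustrative assumptions.

```python
import numpy as np

def occlusion_map(predict, image, target_class, patch=8, stride=8, baseline=0.0):
    """Hypothetical helper: predict maps a batch of images (N, H, W, ...) to class scores.
    Returns an (H, W) relevance map: larger values = occluding there hurts the score more."""
    h, w = image.shape[:2]
    original_score = predict(image[None])[0, target_class]
    relevance = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            # Replace the patch with the baseline value and re-score.
            occluded[y:y + patch, x:x + patch] = baseline
            score = predict(occluded[None])[0, target_class]
            relevance[y:y + patch, x:x + patch] += original_score - score
            counts[y:y + patch, x:x + patch] += 1
    # Average over overlapping patches; avoid division by zero at uncovered borders.
    return relevance / np.maximum(counts, 1)
```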
CIU does not seem to have a (big) user community, but it has been cited by review papers, e.g. Benchmarking and Survey of Explanation Methods for Black Box Models (though there only for tabular data)!
Conclusion: interesting, but easiest for tabular data (in R and Python). A version for images exists, but only in R. Still not sure if it is applicable to text. It is kind of "orthogonal" to the other methods: it needs neither model internals nor an interpretable surrogate. We might need to implement it almost from scratch; the formulas do not seem difficult (see the sketch below), determining the feature ranges is the challenge.
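To make the "formulas are simple, ranges are hard" point concrete, here is a rough sketch of Contextual Importance (CI) and Contextual Utility (CU) for a single tabular feature, following the definitions in the CIU papers. The `predict_proba` callable, the uniform sampling over the feature range and the hard-coded [0, 1] output range are illustrative assumptions, not the Py-CIU API.

```python
import numpy as np

def ciu(predict_proba, x, feature, feature_range, target_class, n_samples=100):
    """Estimate CI and CU of one feature for instance x by varying that feature
    over its (min, max) range while keeping the rest of x (the context) fixed."""
    x = np.asarray(x, dtype=float)
    lo, hi = feature_range
    samples = np.tile(x, (n_samples, 1))
    samples[:, feature] = np.linspace(lo, hi, n_samples)
    outputs = predict_proba(samples)[:, target_class]
    cmin, cmax = outputs.min(), outputs.max()
    out = predict_proba(x[None])[0, target_class]
    absmin, absmax = 0.0, 1.0  # assumed output range for class probabilities
    # CI: how much of the total output range this feature can span in this context.
    ci = (cmax - cmin) / (absmax - absmin)
    # CU: how favourable the current feature value is within that span.
    cu = (out - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu
```

Determining realistic (lo, hi) ranges per feature, and varying sets of features jointly to capture interactions, is where most of the implementation effort would go.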
See the list of methods we have compiled and evaluated, and the initial larger list of XAI methods.