Shapley values are a theoretically grounded model explanation approach, but their exponential computational cost makes them difficult to use with large deep learning models. This package implements ViT-Shapley, an approach that makes Shapley values practical for vision transformer (ViT) models. The key idea is to learn an amortized explainer model that generates explanations in a single forward pass.
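To make the single-forward-pass idea concrete, here is a minimal PyTorch sketch. The `DummyExplainer` module and all shapes are illustrative assumptions, not this repository's API; in practice a trained ViT-Shapley explainer plays the role of the dummy network.

```python
import torch
import torch.nn as nn

# Minimal sketch of the amortized-explainer idea (illustrative only; this is
# NOT the repository's API). The explainer is a network that maps an image
# directly to estimated Shapley values for every patch and class, so one
# forward pass replaces the exponential number of model evaluations that
# exact Shapley values would require.

class DummyExplainer(nn.Module):
    """Stand-in for a trained explainer; the real one is a ViT."""

    def __init__(self, num_patches: int = 196, num_classes: int = 10):
        super().__init__()
        self.num_patches = num_patches
        self.num_classes = num_classes
        # A real explainer uses a ViT backbone with an attribution head;
        # a pooled linear layer keeps this sketch small and runnable.
        self.pool = nn.AdaptiveAvgPool2d(8)
        self.head = nn.Linear(3 * 8 * 8, num_patches * num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.pool(x).flatten(1)
        return self.head(feats).view(-1, self.num_patches, self.num_classes)

explainer = DummyExplainer()
image = torch.randn(1, 3, 224, 224)  # one 224x224 RGB image (14x14 = 196 patches)
with torch.no_grad():
    values = explainer(image)        # single forward pass -> (1, 196, 10)

# values[0, p, c] estimates patch p's contribution to the prediction of class c.
print(values.shape)
```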
The high-level workflow for using ViT-Shapley is as follows: install the dependencies, train and evaluate the models with the provided scripts, and then run the benchmarking and plotting notebooks described below.
Please see our paper here for more details, as well as the works that ViT-Shapley builds on (KernelSHAP, FastSHAP).
```bash
git clone https://github.com/chanwkimlab/vit-shapley.git
cd vit-shapley
pip install -r requirements.txt
```
Commands for training and testing the models are available in the files under the `scripts` directory.
- Run `notebooks/2_1_benchmarking.ipynb` to obtain results.
- Run `notebooks/2_2_ROAR.ipynb` to run retraining-based ROAR benchmarking.
- Run `notebooks/3_plotting.ipynb` to plot the results.

Pretrained model weights for the vit-base models are available here.
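Once downloaded, a checkpoint can be inspected with standard PyTorch, as in the sketch below. The file name is a placeholder and the `state_dict` key is an assumption (PyTorch Lightning checkpoints commonly nest weights under that key), so check the actual file you download.

```python
import torch

# Sketch of inspecting a downloaded checkpoint (file name is a placeholder).
ckpt = torch.load("explainer_vit-base.ckpt", map_location="cpu")

# Lightning-style checkpoints usually nest weights under "state_dict";
# fall back to the raw object if the checkpoint is a plain state dict.
state_dict = ckpt["state_dict"] if isinstance(ckpt, dict) and "state_dict" in ckpt else ckpt
print(f"loaded {len(state_dict)} parameter tensors")
```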
You can try out ViT-Shapley using Colab.
If you use any part of this code or the pretrained weights in your own work, please cite our paper.