sicara / tf-explain

Interpretability Methods for tf.keras models with Tensorflow 2.x
https://tf-explain.readthedocs.io
MIT License

How about the XRAI which is based on Integrated saliency #140

Open DLyzhou opened 4 years ago

DLyzhou commented 4 years ago

Thanks for your great work! XRAI, which builds on Integrated Gradients, might be a more intuitive interpretability tool for CNN models.

Paper: https://arxiv.org/abs/1906.02825
TF 1.x version: https://github.com/PAIR-code/saliency
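For context, the core of Integrated Gradients (the attribution method XRAI builds on) can be sketched on a toy differentiable function. This is only an illustration, not tf-explain's API: the analytic `grad_f` below stands in for what `tf.GradientTape` would compute on a real Keras model, and the function `f` is a made-up quadratic score.

```python
import numpy as np

W = np.array([1.0, 2.0, 3.0])

def f(x):
    # Toy differentiable "model": a weighted quadratic score.
    return float(np.sum(W * x ** 2))

def grad_f(x):
    # Analytic gradient of f; on a real model this would come
    # from tf.GradientTape rather than a closed-form expression.
    return 2.0 * W * x

def integrated_gradients(x, baseline, steps=100):
    # Riemann-sum (midpoint rule) approximation of the path integral
    # of gradients along the straight line from baseline to x.
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([grad_f(baseline + a * (x - baseline)) for a in alphas])
    avg_grad = grads.mean(axis=0)
    return (x - baseline) * avg_grad

x = np.array([1.0, -1.0, 2.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(x, baseline)
# Completeness axiom: attributions sum to f(x) - f(baseline).
print(attr, attr.sum(), f(x) - f(baseline))
```

XRAI then aggregates such per-pixel attributions over image segments to produce region-level saliency, which is what makes it more readable than raw pixel maps.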