Closed cwmeijer closed 3 years ago
We have tested DeepLift and DeepLiftSHAP via the captum implementation. The results are promising. However, captum only supports PyTorch and cannot work with ONNX models (it raises an error if an ONNX model is passed to the explainer). The original implementation of DeepLift uses TensorFlow. Therefore, we cannot simply implement DeepLiftSHAP in dianna as a wrapper around other existing implementations (especially not captum, which is otherwise a very nice library).
There are two possible solutions: (1) keep using ONNX models and borrow some functions from other DeepLift/DeepLiftSHAP implementations (e.g. captum) to produce our own implementation, specifically for ONNX models; (2) convert ONNX models to PyTorch models and wrap the captum explainer directly (a bit hacky). We need to decide which direction to go.
I imagined option (1) when writing the proposal.
Looking more carefully at the document with selected XAI methods for DIANNA, I see that we voted No for DeepLIFT! Is this the best method to put under SHAP?
During the standup we decided to go for option 2 first (quick conversion of the model to pytorch, and then wrap captum). We can then later see if we want to expand that or go for option 1 completely.
Is there a notebook for the exploration of this method?
[x] How to run it?
[x] Do they work on our data? (or can we replicate the authors' results?)
[x] How can we implement it into Dianna?
Either images or text or both.