PPere5 opened 4 years ago
Quick update: if I try to use a dropout layer and no batchnorm, I get the following error:
```
Traceback (most recent call last):
  File "c:/Users/p/Models/IP_Models/Neural_Network/IP_Conversion_predict.py", line 61, in <module>
    shap_values = explainer.shap_values(data)
  File "C:\Users\p\Documents\Virtual_Environments\env-ml\lib\site-packages\shap\explainers\deep\__init__.py", line 119, in shap_values
    return self.explainer.shap_values(X, ranked_outputs, output_rank_order, check_additivity=check_additivity)
  File "C:\Users\p\Documents\Virtual_Environments\env-ml\lib\site-packages\shap\explainers\deep\deep_tf.py", line 334, in shap_values
    "as a github issue, with a reproducable example if possible so we can debug it." % np.abs(diffs).max()
AssertionError: The SHAP explanations do not sum up to the model's output! This is either because of a rounding error or because an operator in your computation graph was not fully supported. If the sum difference of 1.027870 is significant compared the scale of your model outputs please post as a github issue, with a reproducable example if possible so we can debug it.
```
@PPere5 what is the activation of the last output layer? sigmoid?
Yes
Have you tried adding this line before using the DeepExplainer?

```python
shap.explainers.deep.deep_tf.op_handlers["AddV2"] = shap.explainers.deep.deep_tf.passthrough
```
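For context, a minimal self-contained sketch of how this workaround is meant to be used (the tiny model and the `background` array here are stand-ins, not from the original post; on SHAP >= 0.36 the module path is `shap.explainers._deep` instead, as noted further down the thread):

```python
import numpy as np
import shap
import tensorflow as tf

# Hypothetical stand-in model and background data, just to make the snippet runnable.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
background = np.random.rand(100, 4).astype(np.float32)

# Register a passthrough gradient handler for the AddV2 op, which newer
# TensorFlow versions emit but this SHAP release does not recognize.
# Do this *before* constructing the explainer.
shap.explainers.deep.deep_tf.op_handlers["AddV2"] = shap.explainers.deep.deep_tf.passthrough

explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(background[:10])
```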
Hi, I'm facing the same original issue. I tried @metalwhale's suggestion, but then I got another lookup error. Any suggestions on how to resolve this?
Code Snippet:

```python
# use 100 random training examples as our background dataset to integrate over
background = X_train[np.random.choice(X_train.shape[0], 100, replace=False)]
shap.explainers.deep.deep_tf.op_handlers["AddV2"] = shap.explainers.deep.deep_tf.passthrough
explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(background[:10])
explainer.expected_value[0]
```
Error Log:

```
----> 9 shap_values = explainer.shap_values(background[:10])
     10 explainer.expected_value[0]
     11 # explainer.shap_values

13 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
    966   except Exception as e:  # pylint:disable=broad-except
    967     if hasattr(e, "ag_error_metadata"):
--> 968       raise e.ag_error_metadata.to_exception(e)
    969     else:
    970       raise

StagingError: in user code:

    /usr/local/lib/python3.6/dist-packages/shap/explainers/deep/deep_tf.py:244 grad_graph  *
        x_grad = tape.gradient(out, shap_rAnD)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/backprop.py:1048 gradient  **
        unconnected_gradients=unconnected_gradients)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/imperative_grad.py:77 imperative_grad
        compat.as_str(unconnected_gradients.value))
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/backprop.py:145 _gradient_function
        grad_fn = ops._gradient_registry.lookup(op_name)  # pylint: disable=protected-access
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/registry.py:97 lookup
        "%s registry has no entry for: %s" % (self._name, name))

    LookupError: gradient registry has no entry for: shap_TensorListStack
```
When running `explainer.expected_value[0]` I'm also getting the error:

```
LookupError: gradient registry has no entry for: shap_TensorListStack
```
I was having the same issue. Using GradientExplainer instead of DeepExplainer is a temporary solution. Please see: https://github.com/slundberg/shap/issues/885#issuecomment-564778328
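For anyone trying that workaround, a minimal sketch (the `model` and `background` names are placeholders carried over from the snippets above):

```python
import shap

# GradientExplainer computes expected gradients via TF's own autodiff,
# so it does not depend on SHAP's per-op handler registry the way
# DeepExplainer does.
explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(background[:10])
```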
I experienced a similar error with another layer and GradientExplainer. Updating TensorFlow solved the issue for me.
I am experiencing the same error with:

```python
explainer = shap.DeepExplainer(model, x_train[:100])
shap_values = explainer.shap_values(x_test[:10])
```

```
LookupError: gradient registry has no entry for: shap_TensorListStack
```
Yeah, I think the best option is to update TensorFlow and SHAP. I am running TF v2.1 and SHAP v0.36, and I still have issues with DeepExplainer sometimes. However, GradientExplainer seems to work fine, although it does not offer the same functionality.
@juliotorrest, thanks for your response. I am using current versions of TensorFlow and SHAP:

TensorFlow version: 2.2.1
SHAP version: 0.36.0

but I still have the same problem.
Even with GradientExplainer?
Also, earlier in this thread there were some mentions of how to initialize the gradient. Maybe that works.
Does anyone know whether this issue can be solved by downgrading or updating either TF/Keras or SHAP? I'm not sure which versions SHAP is targeting with its 0.36 release.
I'm having the same issue using a DeepExplainer on TF 2.3.1 and SHAP 0.36.0. Although it doesn't look nice, adding this line (slightly changed from a snippet in a previous comment) apparently solved my problem, though I have no idea whether it affects the results of the analysis:

```python
shap.explainers._deep.deep_tf.op_handlers["AddV2"] = shap.explainers._deep.deep_tf.passthrough
```
Using TF 2.3.0 and SHAP 0.35.0. This seems to have solved the problem for me:

```python
import tensorflow as tf
from tensorflow.compat.v1.keras.backend import get_session

tf.compat.v1.disable_v2_behavior()
```
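One caveat, to my understanding: `disable_v2_behavior()` has to run before the model is built or loaded, so that the whole graph is constructed in v1 mode. A sketch of the ordering (the model path is hypothetical, `background` as defined in earlier snippets):

```python
import tensorflow as tf
from tensorflow.compat.v1.keras.backend import get_session
import shap

# Must come before any model is constructed or loaded.
tf.compat.v1.disable_v2_behavior()

model = tf.keras.models.load_model("my_model.h5")  # hypothetical path
explainer = shap.DeepExplainer(model, background)
```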
> Using TF 2.3.0 and SHAP 0.35.0. This seems to have solved the problem for me:
> `from tensorflow.compat.v1.keras.backend import get_session`
> `tf.compat.v1.disable_v2_behavior()`

Saved my life!
If you are running in a Jupyter notebook, just restart the kernel and run this again:

```python
shap.explainers.deep.deep_tf.op_handlers["AddV2"] = shap.explainers.deep.deep_tf.passthrough
```
@SudilHasitha try

```python
shap.explainers._deep.deep_tf.op_handlers["AddV2"] = shap.explainers._deep.deep_tf.passthrough
```

as suggested by @carloszanella.
I built an ANN with TensorFlow 2.3.0 and Keras 2.4.3, and the combination of

```python
tf.compat.v1.disable_v2_behavior()
e = shap.DeepExplainer(model, X_train)
shap_values = e.shap_values(X_test, check_additivity=False)
```

makes things run. However, the resulting SHAP values, in addition to failing the additivity check (the mean model prediction plus the sum of the SHAP values != the model prediction), also do not make sense in the context of the model's predictions. For example, a test instance that the ANN predicts to have a target property value of 10 will have a lower sum of SHAP values than a test instance that the ANN predicts to have a target property value of 0. So it seems something goes very wrong with the SHAP values if you force the SHAP analysis to run this way. I've been told that this additivity issue does not come up when using TensorFlow 1.14, however.

TL;DR: Forcing the SHAP analysis to run may produce invalid SHAP values.
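A quick way to check the additivity property on your own values (a sketch; `model`, `e`, `shap_values`, and `X_test` are assumed from the snippet above, with a single-output model and 2-D tabular input):

```python
import numpy as np

# The base value plus the per-sample sum of SHAP values should
# (approximately) reconstruct the model's predictions.
preds = model.predict(X_test).flatten()
reconstructed = e.expected_value[0] + shap_values[0].sum(axis=1)
print(np.abs(preds - reconstructed).max())  # should be near 0 for valid explanations
```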
Issue #1238 seems related to this problem
Update: using KernelExplainer instead of DeepExplainer seems to fix things! See #1199
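A hedged sketch of the KernelExplainer route (names are placeholders; summarizing the background with `shap.kmeans` keeps the runtime manageable, since KernelExplainer is much slower than the model-specific explainers):

```python
import shap

# KernelExplainer is model-agnostic: it only needs a prediction function,
# so it sidesteps the TF graph/op-handler problems entirely.
background_summary = shap.kmeans(X_train, 50)
explainer = shap.KernelExplainer(model.predict, background_summary)
shap_values = explainer.shap_values(X_test[:10])
```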
Problem solved, thank you @emarkou @gianmarco-terrones
@metalwhale's answer missed a `_` before `deep`. Instead of

```python
shap.explainers.deep.deep_tf.op_handlers["AddV2"] = shap.explainers.deep.deep_tf.passthrough
```

it should be

```python
shap.explainers._deep.deep_tf.op_handlers["AddV2"] = shap.explainers._deep.deep_tf.passthrough
```
Hi,

After banging my head against this problem for the past few hours, I decided to ask for your assistance.

I am working with a Keras model with the following layout:

After training the model, running:

```python
explainer = shap.DeepExplainer(loaded_model, data)
shap_values = explainer.shap_values(data, check_additivity=False)
```

(sorry, I cannot share the whole code, confidentiality and such...)

will return an error:

The same code runs fine if I alter the model to remove the batchnorm layers. I am using TensorFlow 2.1. Could you please shed some light on this issue?

Thank you!