A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also includes support for Shapley Values sampling. (ICLR 2018)
import os
import numpy as np
from keras import backend as K
from keras.models import load_model, Model
from keras.layers import Reshape
from deepexplain.tensorflow import DeepExplain

with DeepExplain(session=K.get_session()) as de:
    # Load the trained encoder-decoder model inside the DeepExplain context
    model_path = os.path.join(os.getcwd(), 'N58_1', 'FOLD_00', 'model_E099_0.897.hdf5')
    model = load_model(model_path)

    # Flatten the 256x256 output so a single output pixel can be targeted
    flat = Reshape(target_shape=(256 * 256,))(model.layers[-1].output)
    flat_model = Model(model.layers[0].input, flat)

    input_tensor = flat_model.layers[0].input
    target_tensor = flat_model(input_tensor)

    # One input image and a one-hot mask selecting the output pixel of interest
    xs = images[0][np.newaxis, ...]
    ys = np.zeros(256 ** 2)[np.newaxis, ...]
    ys[0, 15678] = 1

    attributions_oc = de.explain('occlusion', target_tensor, input_tensor, xs,
                                 window_shape=(32, 32, 1), step=16)
Basically, I have an encoder-decoder neural network, and I want to see whether I can determine feature importances for particular output pixels. However, when I run this code, the output is all zeros (I tried plotting it inside the session context), and attributions_oc does not appear to exist afterwards; that is, it is not accessible in my Jupyter notebook. Yet I get no errors. What am I doing wrong?
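For reference, occlusion-based attribution for a single output scalar amounts to sliding a masking window over the input and recording how much the selected output changes when each region is hidden. Below is a minimal NumPy sketch of that idea; `toy_model`, the window/step values, and the image size are all hypothetical illustrations, not DeepExplain's internals or API.

```python
import numpy as np

def toy_model(x):
    # Hypothetical stand-in for a network: the "output pixel" we explain
    # is simply the mean of the input image.
    return x.mean()

def occlusion_attribution(x, model, window=8, step=4, baseline=0.0):
    """Slide a square occluding window over x and record, per input pixel,
    the average drop in the model output when that pixel is masked."""
    h, w = x.shape
    attributions = np.zeros_like(x, dtype=float)
    counts = np.zeros_like(x, dtype=float)
    reference = model(x)  # unoccluded output
    for i in range(0, h - window + 1, step):
        for j in range(0, w - window + 1, step):
            occluded = x.copy()
            occluded[i:i + window, j:j + window] = baseline
            delta = reference - model(occluded)
            attributions[i:i + window, j:j + window] += delta
            counts[i:i + window, j:j + window] += 1
    # Average over all windows that covered each pixel
    return np.divide(attributions, counts,
                     out=np.zeros_like(attributions), where=counts > 0)

x = np.ones((32, 32))
attr = occlusion_attribution(x, toy_model)
```

Since the toy model is the mean of a uniform image, every pixel ends up with the same positive attribution; with a real network, structure in `attr` highlights which input regions the chosen output pixel depends on.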