A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also includes support for Shapley Values sampling. (ICLR 2018)
I can run the mnist_cnn_keras example as-is without any problem; however, when I add a BatchNormalization layer I get the following error:
You must feed a value for placeholder tensor 'conv2d_1_input' with dtype float and shape [?,1,28,28]
[[Node: conv2d_1_input = Placeholder[dtype=DT_FLOAT, shape=[?,1,28,28], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
[[Node: conv2d_2/BiasAdd/_641 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_676_conv2d_2/BiasAdd", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
At first glance this appears to be due to the keras_learning_phase placeholder not being passed on correctly, although from a look at the source it does seem that you set it. This only happens with DeepLIFT; all other methods work fine.
For reference, here's the network I tried to fit (assuming image_data_format = 'channels_first'):
from keras.models import Sequential
from keras.layers import (Activation, BatchNormalization, Conv2D, Dense,
                          Dropout, Flatten, MaxPooling2D)

input_shape = (1, 28, 28)  # channels_first, matching the placeholder shape above
num_classes = 10           # MNIST

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(BatchNormalization(axis=1))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
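In case it helps with reproduction: a workaround that has helped with similar BatchNormalization/learning-phase problems (a sketch and an assumption on my part, not a confirmed fix for DeepLIFT here) is to force Keras into inference mode before any layer is constructed, so BatchNormalization uses its moving statistics and the graph carries no unfed learning-phase placeholder:

```python
from keras import backend as K

# Assumption: forcing inference mode before building the model means
# BatchNormalization is wired to its moving mean/variance, so no
# keras_learning_phase placeholder needs to be fed when DeepExplain
# re-runs parts of the graph.
K.set_learning_phase(0)

# ... build the Sequential model after this point ...
```

With this set before `Sequential()` is constructed, attributions would be computed against the inference-mode graph, which is what you want for explanation anyway.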