SeldonIO / alibi

Algorithms for explaining machine learning models
https://docs.seldon.io/projects/alibi/en/stable/

Using a different classifier (BiLSTM) to train the model to be investigated for Integrated Gradients #369

Closed Kosisochi closed 3 years ago

Kosisochi commented 3 years ago

I tried the sample code with the IMDB dataset and CNN, and it works fine. I then tried loading an already pre-trained BiLSTM of mine, and the attribution shape was 2D instead of the expected 3D, i.e. (nb_samples, max_len, embedding_dim). The shape returned is (nb_samples, 200) when I do `print('Attributions shape:', explanation.attributions[0].shape)`. I also tried training the BiLSTM and using it immediately afterwards, instead of loading a saved model, and it has the same problem. Because of this, I cannot sum the attributions, since I get the error `numpy.AxisError: axis 2 is out of bounds for array of dimension 2`. The BiLSTM has a W2V embedding input and a Dense layer at the output, and is compiled with loss='sparse_categorical_crossentropy' and optimizer='Adagrad'.

Is using a different classifier allowed with Integrated Gradients for text? Can anyone explain why a 2D attribution shape is returned for a BiLSTM, and what the 200 represents?

Kosisochi commented 3 years ago

OK, I figured it out. I was selecting layer 1, which is what the sample code came with, but in my BiLSTM model layer 1 is the BiLSTM layer, while layer 0 is the Embedding layer. When I selected layer 0, which has output shape (None, 38, 300), it worked fine.
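The layer shapes above can be illustrated with a minimal Keras sketch (the vocabulary size, sequence length of 38, embedding dimension of 300, and 100 LSTM units are illustrative assumptions, not taken from the thread). It also shows where the 200 comes from: a Bidirectional LSTM with 100 units concatenates the forward and backward final states, so its output is 2D with width 200, whereas the Embedding layer's output is 3D.

```python
# Hedged sketch: inspecting layer output shapes of a BiLSTM text classifier
# to see which layer gives 3D attributions. Sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(38,)),                 # token ids, max_len = 38
    layers.Embedding(10000, 300),                # layer 0: (None, 38, 300), 3D
    layers.Bidirectional(layers.LSTM(100)),      # layer 1: (None, 200), 2D
    layers.Dense(2, activation="softmax"),       # layer 2: (None, 2)
])

for i, layer in enumerate(model.layers):
    print(i, layer.name, tuple(layer.output.shape))
```

Passing `layer=model.layers[0]` (the Embedding layer) to `IntegratedGradients` therefore yields attributions of shape (nb_samples, 38, 300), while layer 1 yields the 2D (nb_samples, 200) seen in the question.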

Thank you for this library. It has made my life easier.

jklaise commented 3 years ago

@Kosisochi thanks for the feedback! Yes, for text applications you would usually want to calculate the attributions with respect to the embedding layer and then sum over the embedding dimension to get attributions per token. Great to hear you managed to make it work with respect to the correct layer.
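The summing step described above can be sketched in plain NumPy (the shapes nb_samples=4, max_len=38, embedding_dim=300 are illustrative assumptions):

```python
# Hedged sketch: collapsing embedding-layer attributions to per-token scores
# by summing over the embedding dimension. Shapes are illustrative.
import numpy as np

# Attributions as returned when explaining with respect to the Embedding layer:
# (nb_samples, max_len, embedding_dim)
attributions = np.random.randn(4, 38, 300)

# Sum over the embedding dimension to get one attribution score per token.
token_attributions = attributions.sum(axis=2)
print(token_attributions.shape)  # (4, 38)
```

Summing over axis 2 is exactly the step that fails with `numpy.AxisError` when the attributions are 2D, i.e. when they were computed with respect to a non-embedding layer.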