OK, I figured it out. I was selecting layer 1, as in the sample code, but in my BiLSTM model layer 1 is the BiLSTM layer, while layer 0 is the Embedding layer. When I selected layer 0 instead, which has output shape (None, 38, 300), it worked fine.
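For anyone hitting the same issue, here is a minimal sketch of passing the Embedding layer explicitly. It assumes the alibi-style `IntegratedGradients` API suggested by `explanation.attributions` in this thread, a trained Keras `model` with the Embedding layer at index 0, and a hypothetical `x_test` batch of padded token ids:

```python
import numpy as np
from alibi.explainers import IntegratedGradients

# `model` is assumed to be the trained BiLSTM discussed in this thread, with
# the Embedding layer at index 0; `x_test` is a hypothetical batch of token ids.
x_test = np.random.randint(0, 20000, size=(8, 38))
preds = model.predict(x_test).argmax(axis=1)

ig = IntegratedGradients(model,
                         layer=model.layers[0],  # the Embedding layer, not the BiLSTM
                         n_steps=50)
explanation = ig.explain(x_test, target=preds)
print(explanation.attributions[0].shape)  # (nb_samples, max_len, embedding_dim)
```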
Thank you for this library. It has made my life easier.
@Kosisochi thanks for the feedback! Yes, for text applications you would usually want to calculate the attributions with respect to the embedding layer and then sum over the embedding dimension to get attributions per token. Great to hear you managed to make it work with respect to the correct layer.
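A one-line sketch of that summing step, assuming `explanation` was computed with respect to the Embedding layer as above, so the attributions have shape (nb_samples, max_len, embedding_dim):

```python
import numpy as np

attrs = explanation.attributions[0]
token_attrs = attrs.sum(axis=2)   # one attribution score per token
print(token_attrs.shape)          # (nb_samples, max_len)
```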
I tried the sample code for the IMDB dataset and CNN, and it works fine. Then I loaded an already pre-trained BiLSTM of mine, and the shape of the attributions was 2D instead of the expected 3D, i.e. (nb_samples, max_len, embedding_dim). The shape being returned is (nb_samples, 200) when I do `print('Attributions shape:', explanation.attributions[0].shape)`.
Then I tried training the BiLSTM and explaining it immediately after training, instead of loading a saved model, and it has the same problem. Because of this, I cannot sum the attributions over the embedding dimension, since I get the error `numpy.AxisError: axis 2 is out of bounds for array of dimension 2`.
The BiLSTM has a W2V embedding input and a Dense layer at the output. It is compiled with `loss='sparse_categorical_crossentropy'` and `optimizer='Adagrad'`.
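For context, a minimal sketch of the architecture described above; all sizes (vocabulary, 100 LSTM units, number of classes) are assumptions, with `max_len=38` and `embedding_dim=300` taken from the (None, 38, 300) shape mentioned earlier in the thread. Note that if the LSTM has 100 units, the `Bidirectional` wrapper outputs 200 features, which would match the 200 in the 2D attribution shape when attributions are computed with respect to layer 1:

```python
import numpy as np
from tensorflow import keras

# Assumed placeholder sizes; the real values come from the W2V setup.
vocab_size, embedding_dim, max_len, n_classes = 20000, 300, 38, 2
w2v_matrix = np.random.rand(vocab_size, embedding_dim)  # stand-in for the W2V weights

model = keras.Sequential([
    keras.layers.Embedding(vocab_size, embedding_dim,
                           embeddings_initializer=keras.initializers.Constant(w2v_matrix),
                           input_length=max_len),          # layer 0: Embedding
    keras.layers.Bidirectional(keras.layers.LSTM(100)),    # layer 1: BiLSTM -> (batch, 200)
    keras.layers.Dense(n_classes, activation='softmax'),   # layer 2: output
])
model.compile(loss='sparse_categorical_crossentropy', optimizer='Adagrad')
model.summary()
```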
Is using a different classifier allowed when using Integrated Gradients for text? Can anyone explain why a 2D attribution shape is returned for the BiLSTM, and what the 200 represents?