Open ma-batita opened 2 years ago
exp = explainer.explain_instance(text, predict, num_features=6, top_labels=2, num_samples=3)

You can lower the num_samples parameter (default: 5000).
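For context, a minimal sketch of the full call, assuming a LimeTextExplainer and your own text and predict objects (the class names here are placeholders):

    from lime.lime_text import LimeTextExplainer

    # Hypothetical class names; substitute your own labels.
    explainer = LimeTextExplainer(class_names=["negative", "positive"])

    # Fewer perturbed samples means fewer forward passes and less memory,
    # at the cost of a noisier explanation.
    exp = explainer.explain_instance(text, predict, num_features=6,
                                     top_labels=2, num_samples=500)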
You could batch the module() forward pass within your predict(text) function by only taking batch-sized chunks of the texts and concatenating the probas before returning. The num_samples default of 5000 gives you a single batch of 5000 samples, which is almost certainly causing the OOM. (also commented here)
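A minimal sketch of that batching pattern, assuming a Hugging Face tokenizer and a PyTorch classification model are already defined (the names tokenizer and model, and the batch size of 32, are assumptions, not from this thread):

    import numpy as np
    import torch

    def predict(texts, batch_size=32):
        # Run the forward pass on batch-sized chunks of the texts
        # and concatenate the probabilities before returning.
        probas = []
        with torch.no_grad():
            for i in range(0, len(texts), batch_size):
                chunk = texts[i:i + batch_size]
                enc = tokenizer(chunk, padding=True, truncation=True,
                                return_tensors="pt")
                logits = model(**enc).logits
                probas.append(torch.softmax(logits, dim=-1).cpu().numpy())
        return np.concatenate(probas, axis=0)

LIME calls this function with a list of perturbed strings and expects an array of shape (num_samples, num_classes), which is what the concatenation returns.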
Hello,
I am using LIME to get an interpretation for a classification problem. First, I use the FlauBERT tokenizer (I also tried different tokenizers and had the same problem) to turn my text into tokens. Next, I feed the tokens to my model and get probabilities from a softmax (all of this is wrapped in a prediction method).
After that, I created the explainer and everything else that goes with it...
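For illustration, such a wrapper typically looks something like the sketch below (FlauBERT classes from Hugging Face transformers; the checkpoint name is an assumption). Encoding every perturbed sample LIME generates in one pass is what blows up on long texts:

    import torch
    from transformers import FlaubertTokenizer, FlaubertForSequenceClassification

    tokenizer = FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_cased")
    model = FlaubertForSequenceClassification.from_pretrained("flaubert/flaubert_base_cased")

    def predict(texts):
        # Tokenizes ALL of LIME's perturbed samples at once; with the
        # num_samples default of 5000 this is one enormous batch.
        enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            logits = model(**enc).logits
        return torch.softmax(logits, dim=-1).numpy()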
The problem is that with a short message I get my result (for example msg = "bonjour ca va"), but if I run it with a longer message I get an OOM after about a minute. Can you please check whether I missed something here? Thanks!!