creisle opened 1 year ago
Hi there,
I hope you're doing well. I noticed your GitHub issue regarding the prediction differences in the transformers-interpret library (#127), and I'm experiencing a similar issue myself.
I've been trying to use the library with a custom fine-tuned model, but like you, I'm finding that the predicted label doesn't match my expectations when using the explainer.
Have you had any luck resolving this issue since you posted it? If so, I would greatly appreciate any insights or tips you could share.
I think I may be doing something wrong, but I can't seem to get this working. I can get it to produce the diagram and attribution scores, but when I check what label it has predicted, it doesn't match the one I expect. For example, I am using a custom fine-tuned model, which I evaluate as shown below.
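Here's a minimal sketch of that evaluation (the model path, example text, and label mapping are placeholders, not my actual values):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder path to my custom fine-tuned checkpoint
model_path = "path/to/fine-tuned-model"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
model.eval()

text = "an example input sentence"  # placeholder input
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# Plain forward pass, no explainer involved
with torch.no_grad():
    logits = model(**inputs).logits

pred_id = logits.argmax(dim=-1).item()
print(model.config.id2label[pred_id])  # this is the label I expect
```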
When I use the same model and the same inputs with the explainer, however, I do not get the same label prediction.
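The explainer call is roughly this (transformers-interpret's `SequenceClassificationExplainer`, reusing `model`, `tokenizer`, and `text` from the snippet above):

```python
from transformers_interpret import SequenceClassificationExplainer

cls_explainer = SequenceClassificationExplainer(model, tokenizer)
word_attributions = cls_explainer(text)  # per-token attribution scores

cls_explainer.visualize()  # the diagram renders fine
# ...but the predicted class reported here is not the one from the plain forward pass
print(cls_explainer.predicted_class_name)
```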
Even when I subclass the explainer class to ensure the tokenizer operates in the same way, it still predicts incorrectly. The model and tokenizer were set up identically to the above.
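The subclass looked roughly like this; note that overriding `encode` is just my guess at the explainer's tokenization hook, so treat it as illustrative:

```python
from transformers_interpret import SequenceClassificationExplainer

class ConsistentTokenizerExplainer(SequenceClassificationExplainer):
    # Assumed override point: force the same truncation behaviour as the
    # evaluation code. Whether `encode` is the right internal hook is a guess.
    def encode(self, text=None):
        return self.tokenizer.encode(text, add_special_tokens=False, truncation=True)

custom_explainer = ConsistentTokenizerExplainer(model, tokenizer)
custom_explainer(text)
print(custom_explainer.predicted_class_name)  # still not the label I expect
```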