cdpierse / transformers-interpret

Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Apache License 2.0
1.27k stars 96 forks

Issue with Zero Shot Classifier #133

Open ericbugin1 opened 1 year ago

ericbugin1 commented 1 year ago

Environment: captum==0.6.0, transformers==4.28.1, transformers-interpret==0.10.0, Python 3.7.4

When I run the code below, I get the error `AssertionError: Forward hook did not obtain any outputs for given layer`. I have tried running it both locally and in Google Colab, with the same result in both.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import ZeroShotClassificationExplainer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")

zero_shot_explainer = ZeroShotClassificationExplainer(model, tokenizer)

word_attributions = zero_shot_explainer(
    "Today apple released the new Macbook showing off a range of new features found in the proprietary silicon chip computer. ",
    labels=["finance", "technology", "sports"],
)
```
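One detail that may be relevant when triaging this: `facebook/bart-large-mnli` is a BART model, and BART is an encoder-decoder architecture, which can interact differently with layer hooks than encoder-only models. This is a quick diagnostic sketch, not a confirmed root cause; it only shows that BART's config flags the model as encoder-decoder:

```python
from transformers import BartConfig

# BART's default configuration marks the architecture as
# encoder-decoder (seq2seq), unlike encoder-only models such as BERT.
config = BartConfig()
print(config.is_encoder_decoder)  # True
```

If the explainer's forward hooks target a layer that never fires for encoder-decoder models, that could explain why no outputs are captured, but this is only a hypothesis.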