cdpierse / transformers-interpret

Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Apache License 2.0

zero_shot_explainer not working with xlm-roberta-large-xnli-anli #70

Open SiyaoZheng opened 3 years ago

SiyaoZheng commented 3 years ago

model_name="vicgalle/xlm-roberta-large-xnli-anli"

tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name)
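The explainer call that triggers the error, reconstructed from the traceback below, is roughly:

from transformers_interpret import ZeroShotClassificationExplainer

zero_shot_explainer = ZeroShotClassificationExplainer(model, tokenizer)

word_attributions = zero_shot_explainer(
    "国家中国美国",
    labels=["国家", "中国", "美国"],
)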

IndexError                                Traceback (most recent call last)
<ipython-input> in <module>
      3 zero_shot_explainer = ZeroShotClassificationExplainer(model, tokenizer)
      4 
----> 5 word_attributions = zero_shot_explainer(
      6     "国家中国美国",
      7     labels = ["国家", "中国", "美国"],

/opt/conda/lib/python3.8/site-packages/transformers_interpret/explainers/zero_shot_classification.py in __call__(self, text, labels, embedding_type, hypothesis_template, include_hypothesis, internal_batch_size, n_steps)
    314     self.hypothesis_labels = [hypothesis_template.format(label) for label in labels]
    315 
--> 316     predicted_text_idx = self._get_top_predicted_label_idx(
    317         text, self.hypothesis_labels
    318     )

/opt/conda/lib/python3.8/site-packages/transformers_interpret/explainers/zero_shot_classification.py in _get_top_predicted_label_idx(self, text, hypothesis_labels)
    143     )
    144     attention_mask = self._make_attention_mask(input_ids)
--> 145     preds = self._get_preds(
    146         input_ids, token_type_ids, position_ids, attention_mask
    147     )

/opt/conda/lib/python3.8/site-packages/transformers_interpret/explainers/question_answering.py in _get_preds(self, input_ids, token_type_ids, position_ids, attention_mask)
    212     ):
    213     if self.accepts_position_ids and self.accepts_token_type_ids:
--> 214         preds = self.model(
    215             input_ids,
    216             token_type_ids=token_type_ids,

/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    720         result = self._slow_forward(*input, **kwargs)
    721     else:
--> 722         result = self.forward(*input, **kwargs)
    723     for hook in itertools.chain(
    724         _global_forward_hooks.values(),

/opt/conda/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict)
    993     return_dict = return_dict if return_dict is not None else self.config.use_return_dict
    994 
--> 995     outputs = self.roberta(
    996         input_ids,
    997         attention_mask=attention_mask,

/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
--> 722         result = self.forward(*input, **kwargs)

/opt/conda/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states, return_dict)
    685     head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
    686 
--> 687     embedding_output = self.embeddings(
    688         input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
    689     )

/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
--> 722         result = self.forward(*input, **kwargs)

/opt/conda/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
    117         inputs_embeds = self.word_embeddings(input_ids)
    118     position_embeddings = self.position_embeddings(position_ids)
--> 119     token_type_embeddings = self.token_type_embeddings(token_type_ids)
    120 
    121     embeddings = inputs_embeds + position_embeddings + token_type_embeddings

/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
--> 722         result = self.forward(*input, **kwargs)

/opt/conda/lib/python3.8/site-packages/torch/nn/modules/sparse.py in forward(self, input)
    122 
    123     def forward(self, input: Tensor) -> Tensor:
--> 124         return F.embedding(
    125             input, self.weight, self.padding_idx, self.max_norm,
    126             self.norm_type, self.scale_grad_by_freq, self.sparse)

/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
   1812     # remove once script supports set_grad_enabled
   1813     _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1814     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)

IndexError: index out of range in self
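The failure happens in token_type_embeddings. XLM-RoBERTa checkpoints are normally configured with type_vocab_size = 1, so any segment id other than 0 is out of range for that embedding table; the frames above suggest the explainer passes BERT-style segment ids (1s for the hypothesis) to the model. A minimal sketch of that hypothesis, using an arbitrary second sentence rather than the explainer's actual hypothesis template:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "vicgalle/xlm-roberta-large-xnli-anli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# RoBERTa-family checkpoints usually ship a single-row token type embedding table.
print(model.config.type_vocab_size)  # expected: 1

# Segment ids of 1 index past that table, which produces the
# "index out of range in self" error in the traceback above.
enc = tokenizer("国家中国美国", "This example is about 中国.", return_tensors="pt")
bad_token_type_ids = torch.ones_like(enc["input_ids"])  # deliberately out of range
model(
    input_ids=enc["input_ids"],
    attention_mask=enc["attention_mask"],
    token_type_ids=bad_token_type_ids,
)  # raises IndexError: index out of range in self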
arianpasquali commented 2 years ago

Same here for an xlm-roberta-base sentiment analysis model.

ARyelund commented 2 years ago

Did anyone manage to solve this problem with xlm-roberta-base?
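One possible, untested workaround sketch, assuming accepts_token_type_ids is a plain instance attribute on the explainer: the question_answering.py frame in the traceback only passes token_type_ids to the model when that flag is set, and RoBERTa models fall back to all-zero segment ids when none are given, so forcing the flag off may avoid the bad lookup.

from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers_interpret import ZeroShotClassificationExplainer

model_name = "vicgalle/xlm-roberta-large-xnli-anli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

zero_shot_explainer = ZeroShotClassificationExplainer(model, tokenizer)

# Untested assumption: overriding this flag keeps the explainer from passing
# segment ids of 1 to a model whose token type table only has index 0.
zero_shot_explainer.accepts_token_type_ids = False

word_attributions = zero_shot_explainer(
    "国家中国美国",
    labels=["国家", "中国", "美国"],
)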

vaishn99 commented 1 year ago

Is it working for the facebook/bart-large-mnli model?