huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

IndexError: index out of range in self when using Longformer and passing token_type_ids #9162

Closed: yuvarajvc closed this issue 3 years ago

yuvarajvc commented 3 years ago

## Who can help

## To reproduce

```python
import torch
from transformers import LongformerModel, LongformerTokenizer

model = LongformerModel.from_pretrained('allenai/longformer-base-4096')
tokenizer = LongformerTokenizer.from_pretrained('roberta-base')

SAMPLE_TEXT = ' '.join(['Hello world! '] * 100)  # long input document
input_ids = torch.tensor(tokenizer.encode(SAMPLE_TEXT)).unsqueeze(0)  # batch of size 1

attention_mask = torch.ones(input_ids.shape, dtype=torch.long, device=input_ids.device)
global_attention_mask = torch.zeros(input_ids.shape, dtype=torch.long, device=input_ids.device)
segment_ids = torch.ones(input_ids.shape, dtype=torch.long, device=input_ids.device)

outputs = model(
    input_ids=input_ids,
    attention_mask=attention_mask,
    global_attention_mask=global_attention_mask,
    token_type_ids=segment_ids,
)
```

## Error info

```
IndexError                                Traceback (most recent call last)
in
----> 1 outputs = model(input_ids=input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask, token_type_ids=segment_ids)

~\.conda\envs\env\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

~\.conda\envs\env\lib\site-packages\transformers\modeling_longformer.py in forward(self, input_ids, attention_mask, global_attention_mask, token_type_ids, position_ids, inputs_embeds, output_attentions, output_hidden_states)
    675             encoder_attention_mask=None,
    676             output_attentions=output_attentions,
--> 677             output_hidden_states=output_hidden_states,
    678         )
    679

~\.conda\envs\env\lib\site-packages\transformers\modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states)
    751
    752         embedding_output = self.embeddings(
--> 753             input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
    754         )
    755         encoder_outputs = self.encoder(

~\.conda\envs\env\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

~\.conda\envs\env\lib\site-packages\transformers\modeling_roberta.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
     66
     67         return super().forward(
---> 68             input_ids, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds
     69         )
     70

~\.conda\envs\env\lib\site-packages\transformers\modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
    178             inputs_embeds = self.word_embeddings(input_ids)
    179         position_embeddings = self.position_embeddings(position_ids)
--> 180         token_type_embeddings = self.token_type_embeddings(token_type_ids)
    181
    182         embeddings = inputs_embeds + position_embeddings + token_type_embeddings

~\.conda\envs\env\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

~\.conda\envs\env\lib\site-packages\torch\nn\modules\sparse.py in forward(self, input)
    124         return F.embedding(
    125             input, self.weight, self.padding_idx, self.max_norm,
--> 126             self.norm_type, self.scale_grad_by_freq, self.sparse)
    127
    128     def extra_repr(self) -> str:

~\.conda\envs\env\lib\site-packages\torch\nn\functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
   1850         # remove once script supports set_grad_enabled
   1851         _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1852     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
   1853
   1854

IndexError: index out of range in self
```

## Expected behavior

If I do not pass `token_type_ids` (segment IDs), the model runs, and it also runs if I pass segment IDs that are all zeros. But when I pass segment IDs that are all 1s, or a mix of 0s and 1s, I get `IndexError: index out of range in self`.
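For reference, a minimal sketch of the three cases described above. It reuses `model`, `input_ids`, `attention_mask`, and `global_attention_mask` from the reproduction script; the `zero_segment_ids` / `one_segment_ids` names are introduced here only for illustration.

```python
import torch

# Case 1: omit token_type_ids entirely -- the forward pass succeeds.
outputs = model(input_ids=input_ids,
                attention_mask=attention_mask,
                global_attention_mask=global_attention_mask)

# Case 2: all-zero segment IDs -- the forward pass also succeeds.
zero_segment_ids = torch.zeros(input_ids.shape, dtype=torch.long, device=input_ids.device)
outputs = model(input_ids=input_ids,
                attention_mask=attention_mask,
                global_attention_mask=global_attention_mask,
                token_type_ids=zero_segment_ids)

# Case 3: segment IDs containing 1s -- raises "IndexError: index out of range in self".
one_segment_ids = torch.ones(input_ids.shape, dtype=torch.long, device=input_ids.device)
outputs = model(input_ids=input_ids,
                attention_mask=attention_mask,
                global_attention_mask=global_attention_mask,
                token_type_ids=one_segment_ids)
```
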
yuvarajvc commented 3 years ago

Can we assume that `token_type_ids` are not supported in Longformer?

NielsRogge commented 3 years ago

Yes. In fact, a PR (#9152) was merged yesterday that updates the docs to state that Longformer does not support token type IDs.
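For context, a minimal sketch of why the 1s fail, assuming the same `allenai/longformer-base-4096` checkpoint as above: the RoBERTa-derived config sets `type_vocab_size` to 1, so the token type embedding table has a single row and any `token_type_ids` value other than 0 indexes past it.

```python
from transformers import LongformerModel

model = LongformerModel.from_pretrained('allenai/longformer-base-4096')

# Only one token type embedding row is allocated, so a token_type_ids value of 1
# falls outside the embedding table and triggers the IndexError shown above.
print(model.config.type_vocab_size)                            # expected: 1
print(model.embeddings.token_type_embeddings.num_embeddings)   # expected: 1
```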

yuvarajvc commented 3 years ago

@NielsRogge Thank you

github-actions[bot] commented 3 years ago

This issue has been automatically marked as stale and closed because it has not had recent activity. Thank you for your contributions.

If you think this still needs to be addressed please comment on this thread.