cdqa-suite / cdQA

⛔ [NOT MAINTAINED] An End-To-End Closed Domain Question Answering System.
https://cdqa-suite.github.io/cdQA-website/
Apache License 2.0

AttributeError: 'BertConfig' object has no attribute 'is_decoder' #310

Closed · cppntn closed this issue 4 years ago

cppntn commented 4 years ago

Hello, thanks for this wonderful work.

I am trying to reproduce this code, but I get the following error:

```
AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
      3 json_data['data'][0]['paragraphs'][0]['qas'].append({"id":i, "question":query})
      4 examples, features = processor.fit_transform(X=json_data['data'])
----> 5 qa_model.predict(X=(examples, features))
      6

~/anaconda3/lib/python3.7/site-packages/cdqa/reader/bertqa_sklearn.py in predict(self, X, n_predictions, retriever_score_weight, return_all_preds)
   1451                 inputs['token_type_ids'] = batch[2]
   1452             example_indices = batch[3]
-> 1453             batch_start_logits, batch_end_logits = self.model(**inputs)
   1454
   1455             for i, example_index in enumerate(example_indices):

~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    539             result = self._slow_forward(*input, **kwargs)
    540         else:
--> 541             result = self.forward(*input, **kwargs)
    542         for hook in self._forward_hooks.values():
    543             hook_result = hook(self, input, result)

~/anaconda3/lib/python3.7/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, start_positions, end_positions)
   1244             position_ids=position_ids,
   1245             head_mask=head_mask,
-> 1246             inputs_embeds=inputs_embeds)
   1247
   1248         sequence_output = outputs[0]

~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    539             result = self._slow_forward(*input, **kwargs)
    540         else:
--> 541             result = self.forward(*input, **kwargs)
    542         for hook in self._forward_hooks.values():
    543             hook_result = hook(self, input, result)

~/anaconda3/lib/python3.7/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask)
    671         # - if the model is an encoder, make the mask broadcastable to [batch_size, num_heads, seq_length, seq_length]
    672         if attention_mask.dim() == 2:
--> 673             if self.config.is_decoder:
    674                 batch_size, seq_length = input_shape
    675                 seq_ids = torch.arange(seq_length, device=device)

AttributeError: 'BertConfig' object has no attribute 'is_decoder'
```

Could you please help me with this?
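
The traceback bottoms out in transformers' `modeling_bert.py` at `self.config.is_decoder`, which points to a version mismatch: the `BertConfig` bundled with the saved cdQA model was likely created by a transformers release that did not yet define `is_decoder`, while the installed modeling code expects it. A hypothetical, untested stopgap (assuming the reader exposes the underlying BERT model as `qa_model.model`, as the `self.model(**inputs)` call in the traceback suggests) would be to patch the missing flag before predicting:

```python
# Hypothetical stopgap (untested): give the unpickled config the attribute
# that newer transformers modeling code reads. `qa_model.model` is assumed
# to be the underlying BERT QA model, per the traceback's self.model(**inputs).
if not hasattr(qa_model.model.config, "is_decoder"):
    qa_model.model.config.is_decoder = False  # encoder-only extractive QA reader
```

The cleaner fix, as noted below, is to align the installed transformers version with the one cdQA was built against.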
cppntn commented 4 years ago

I uninstalled transformers (which I had installed by cloning the repo) and ran:

`pip install transformers`

This solved the problem.
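
To confirm which transformers release ended up installed after the reinstall (the compatible release is whatever cdQA pins in its requirements.txt), a quick check from Python:

```python
import transformers

# Print the installed transformers version; compare it against the
# version pinned in cdQA's requirements.txt.
print(transformers.__version__)
```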