allenai / commonsense-kg-completion

MIT License

there is a problem at runtime #5

Closed hvuehu closed 3 years ago

hvuehu commented 4 years ago

When I run `run_kbc_subgraph.py`, an error occurs. The failing line is in `bert_feature_extractor.py`.

```
Traceback (most recent call last):
  File "src/run_kbc_subgraph.py", line 534, in <module>
    main(args)
  File "src/run_kbc_subgraph.py", line 115, in main
    args.sim_relations)
  File "src/run_kbc_subgraph.py", line 52, in load_data
    train_network.add_sim_edges_bert()
  File "D:\commonsense-kg-completion-master\src\reader.py", line 84, in add_sim_edges_bert
    bert_model = BertLayer(self.dataset)
  File "D:\commonsense-kg-completion-master\src\bert_feature_extractor.py", line 185, in __init__
    self.bert_model.to(self.device)
AttributeError: 'collections.OrderedDict' object has no attribute 'to'
```

How can I solve this problem? I look forward to your answer.

chaitanyamalaviya commented 4 years ago

I am not able to get this error. Can you specify the command you ran? And the version of transformers / pytorch you are using? For reference, I am using transformers==3.0.1 and torch==1.2.0.

hvuehu commented 4 years ago

> I am not able to get this error. Can you specify the command you ran? And the version of transformers / pytorch you are using? For reference, I am using transformers==3.0.1 and torch==1.2.0.

I first ran the `simple_lm_finetuning.py` file to get the fine-tuned parameters, then ran the `run_kbc_subgraph.py` file. The command is `python -u src/run_kbc_subgraph.py --dataset conceptnet_部分 --evaluate-every 10 --n-layers 2 --graph-batch-size 30 --sim_relations --bert_concat`. When adding the synthetic sim edges, it loads the `lm_pytorch_model.bin` file generated by the fine-tuning run. Maybe there is something wrong with my steps?

After this line of code, `self.bert_model = torch.load(output_model_file, map_location='cpu')`, `self.bert_model` is an object of type `collections.OrderedDict`. How can `self.bert_model.to(self.device)` work then?

I tried changing the torch and transformers versions, but the error is still the same.

I look forward to your answer.
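For context, a minimal, torch-free sketch of why the call fails: a checkpoint written with `torch.save(model.state_dict(), path)` comes back from `torch.load` as a plain `collections.OrderedDict` mapping parameter names to tensors, not as a model object, so it has no `.to` method (the key and value below are placeholders):

```python
from collections import OrderedDict

# Stand-in for what torch.load returns when the file was written with
# torch.save(model.state_dict(), path): a plain mapping from parameter
# names to tensors, not a model object.
state_dict = OrderedDict([("bert.embeddings.word_embeddings.weight", "...")])

# Calling state_dict.to(device) raises AttributeError, because an
# OrderedDict has no .to method:
assert not hasattr(state_dict, "to")
```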

chaitanyamalaviya commented 4 years ago

I think this is because the new version of transformers repo requires using the BertForPreTraining.from_pretrained(model_directory) method for loading models. You should be able to load the model using that function.
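A sketch of this suggestion (not the repo's actual code; `model_directory`, `device`, and the `ensure_model` helper are placeholders introduced here for illustration):

```python
# The suggested fix: point from_pretrained at the directory that holds
# the fine-tuned pytorch_model.bin and its config.json:
#
#     from transformers import BertForPreTraining
#     bert_model = BertForPreTraining.from_pretrained(model_directory)
#     bert_model.to(device)
#
# If torch.load has to stay (e.g. for an old checkpoint), the loaded
# object must be checked first, because a file written with
# torch.save(model.state_dict(), path) comes back as a bare OrderedDict:
from collections import OrderedDict

def ensure_model(loaded, model_factory):
    """Return a usable model whether `loaded` is a full model object or
    just a saved state_dict (deserialized as an OrderedDict)."""
    if isinstance(loaded, OrderedDict):
        model = model_factory()        # build a fresh model instance
        model.load_state_dict(loaded)  # restore the saved weights
        return model
    return loaded                      # already a full model object
```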

hvuehu commented 4 years ago

> I think this is because the new version of transformers repo requires using the BertForPreTraining.from_pretrained(model_directory) method for loading models. You should be able to load the model using that function.

According to your suggestion, this error has been solved. Thank you very much!