apoorvumang / CronKGQA

ACL 2021: Question Answering over Temporal Knowledge Graphs
MIT License

about embedkgqa #8

Closed. xdcui-nlp closed this issue 2 years ago.

xdcui-nlp commented 2 years ago

Hi, the embedkgqa model seems to be the same as cronkgqa, but which model is the real EmbedKGQA?

xdcui-nlp commented 2 years ago

Hello, I'm surprised. Which model is model1? Its results are the same as cronkgqa.

apoorvumang commented 2 years ago

QA_model_EmbedKGQA_complex is the actual EmbedKGQA model. What do you mean by model1?

xdcui-nlp commented 2 years ago

> QA_model_EmbedKGQA_complex is the actual EmbedKGQA model. What do you mean by model1?

I trained embedkgqa and model1 and got the same results, so I don't know which model model1 represents.

apoorvumang commented 2 years ago

Can you elaborate on how you trained those two models, i.e. the commands used?

xdcui-nlp commented 2 years ago

> Can you elaborate on how you trained those two models, i.e. the commands used?

I used the commands for running the code given in the README.

apoorvumang commented 2 years ago

Are these the exact commands you used?

 CUDA_VISIBLE_DEVICES=1 python -W ignore ./train_qa_model.py --frozen 1 --eval_k 1 --max_epochs 200 \
 --lr 0.00002 --batch_size 250 --mode train --tkbc_model_file tcomplex_17dec.ckpt \
 --dataset wikidata_big --valid_freq 3 --model model1 --valid_batch_size 50  \
 --save_to temp --lm_frozen 1 --eval_split valid

and

 CUDA_VISIBLE_DEVICES=1 python -W ignore ./train_qa_model.py --frozen 1 --eval_k 1 --max_epochs 200 \
 --lr 0.00002 --batch_size 250 --mode train --tkbc_model_file tcomplex_17dec.ckpt \
 --dataset wikidata_big --valid_freq 3 --model embedkgqa --valid_batch_size 50  \
 --save_to temp --lm_frozen 1 --eval_split valid

Also, can you show what results you got with model1? In my experiments with the base model (model1), I do not get results as good as CronKGQA's. It would be helpful if you could share the exact command you ran.

xdcui-nlp commented 2 years ago

> Also, can you show what results you got with model1? In my experiments with the base model (model1), I do not get results as good as CronKGQA's. It would be helpful if you could share the exact command you ran.

Yes, I used these commands, but I didn't save the results of model1. I can train it again and then provide the results to you. Thank you very much for answering my question.

xdcui-nlp commented 2 years ago

Hi, I have another question. I trained embedkgqa_complex, but when I tried to eval it, the following error occurred:

 Traceback (most recent call last):
   File "./train_qa_model.py", line 551, in
     qa_model.load_state_dict(torch.load(filename))
   File "/opt/current-env/anaconda3/envs/tf_2.x/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1223, in load_state_dict
     raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
 RuntimeError: Error(s) in loading state_dict for QA_model_EmbedKGQA_complex:
     Missing key(s) in state_dict: "entity_embedding.weight", "time_embedding.weight".
     Unexpected key(s) in state_dict: "tkbc_model.embeddings.0.weight", "tkbc_model.embeddings.1.weight", "tkbc_model.embeddings.2.weight", "entity_time_embedding.weight", "answer_type_embedding.weight", "combine_all_entities_func_forReal.weight", "combine_all_entities_func_forReal.bias", "combine_all_entities_func_forCmplx.weight", "combine_all_entities_func_forCmplx.bias", "linear2.weight", "linear2.bias", "bn2.weight", "bn2.bias", "bn2.running_mean", "bn2.running_var", "bn2.num_batches_tracked".

apoorvumang commented 2 years ago

This is because embedkgqa_complex needs non-temporal ComplEx embeddings. You can train those separately, but we will upload a trained checkpoint soon. It would be great if you could create a separate issue for this.
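In the meantime, a quick way to see the mismatch is to diff the keys stored in the checkpoint against the keys the freshly built QA model expects. A minimal diagnostic sketch, not part of the repo, assuming `qa_model` and `filename` are the same objects as in the `load_state_dict` call shown in the traceback:

 import torch

 # Diagnostic only: compare the checkpoint's keys with the model's expected keys.
 # `qa_model` and `filename` are assumed to be the same objects used in
 # train_qa_model.py's `qa_model.load_state_dict(torch.load(filename))` call.
 checkpoint = torch.load(filename, map_location='cpu')

 ckpt_keys = set(checkpoint.keys())
 model_keys = set(qa_model.state_dict().keys())

 print('Missing from checkpoint:', sorted(model_keys - ckpt_keys))
 print('Unexpected in checkpoint:', sorted(ckpt_keys - model_keys))

The two printed sets should correspond to the "Missing key(s)" and "Unexpected key(s)" lists in the error above, which shows the checkpoint was saved from a model with a different architecture than QA_model_EmbedKGQA_complex.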

xdcui-nlp commented 2 years ago

> This is because embedkgqa_complex needs non-temporal ComplEx embeddings. You can train those separately, but we will upload a trained checkpoint soon. It would be great if you could create a separate issue for this.

OK, thank you.