zjukg / KGTransformer

[Paper][WWW2023] Structure Pre-training and Prompt Tuning for Knowledge Graph Transfer
https://arxiv.org/pdf/2303.03922.pdf

Zero-shot Image Classification failed to reproduce #4

Closed. YangL256 closed this issue 1 year ago.

YangL256 commented 1 year ago

Dear author, after reading your paper I tried to reproduce it. However, after completing pre-training and saving the model, I ran into problems when reproducing the zero-shot classification task. First, the suffix of the model saved at each pre-training epoch is different. Second, when running the downstream task, a file with the "ep4_ZSL" suffix is supposed to be loaded, but I failed to load it. I tried modifying the suffix and placing the pre-trained model (about 900 MB) into the expected path, but it still failed to load. Perhaps for this reason, my training results on the downstream task were very poor, and the suffix of the saved model was also strange. So my first question is about how the model is saved, and my second question is why the downstream task fails to reproduce. Thank you very much for your reply!

YushanZhu commented 1 year ago

Yes, the suffix of the model saved at each epoch differs during pre-training; the suffix ".epX" means it is the model after the Xth epoch. You can load the pre-trained model after running "get_pretrained_KGTransformer_parameters.py", where Line 5 is used to modify the suffix: just change " target_file = pretrain_model + '_delWE' " to " target_file = pretrain_model + '_ZSL' ". Hope this helps.
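
For reference, a minimal sketch of what that conversion step might look like, assuming the checkpoint is a standard PyTorch save file; the actual get_pretrained_KGTransformer_parameters.py may do more (e.g. filter parameters), and the checkpoint path below is a hypothetical example:

```python
# Minimal sketch (assumed behavior, not the repo's exact script):
# load the pre-trained checkpoint and re-save it under the suffix
# that the zero-shot classification (ZSL) task expects to load.
import torch

pretrain_model = 'output/pretrain.model.ep4'   # hypothetical path to the epoch-4 checkpoint
target_file = pretrain_model + '_ZSL'          # changed from + '_delWE' as described above

state = torch.load(pretrain_model, map_location='cpu')  # load the pre-trained parameters
torch.save(state, target_file)                           # re-save under the expected suffix
print('saved', target_file)
```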

WEIYanbin1999 commented 1 year ago

(Quoting YangL256's original post above.)

Dear YangL256, have you successfully reproduced the results? Did you try running the triple classification task?

YangL256 commented 1 year ago

I tried to run it again, but the model still wouldn't load, probably because of the '2hop_related_triples.pkl' file. The model that won't load was created when I shrank the BIG dataset by reading only the first few tens of thousands of lines of the file. I would also like to ask about the '2hop_related_triples.pkl' file: I regenerated it on a large-memory server and it came out to more than 10 GB. Is that result wrong?

YushanZhu commented 1 year ago

(Quoting YangL256's previous comment.)

It's not wrong; the '2hop_related_triples.pkl' file is indeed relatively large (14 GB+) since it stores each node's own 2-hop subgraph, and running pre-training may need a large-memory server (we use a server with ~200 GB of RAM). We are going to rerun the experiment soon and then upload our logs of intermediate output, which can be used as a reference.
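
If it helps, here is a small sanity check for the generated file; the path and the "entity -> 2-hop triples" layout are assumptions for illustration, not confirmed repo details, and loading the pickle itself requires enough RAM to hold the whole object:

```python
# Quick sanity check for the generated 2-hop subgraph file.
# Assumptions: the path below and the dict-of-subgraphs layout are
# illustrative guesses, not confirmed details of the repo.
import os
import pickle

path = 'data/2hop_related_triples.pkl'                      # hypothetical location
print(f'file size: {os.path.getsize(path) / 1e9:.1f} GB')   # expect roughly 14 GB+

with open(path, 'rb') as f:
    two_hop = pickle.load(f)                                # needs enough RAM for the full object
print(f'number of entries: {len(two_hop)}')
```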

YangL256 commented 1 year ago

Thank you for sharing; I will restart the experiment as soon as possible!