AONE-NLP / DiffuTKG


Failed to achieve the reported results under the default settings of the repository. #3

Open Kizzen983 opened 1 month ago

Kizzen983 commented 1 month ago

Hello! Your outstanding work is impressive. Unfortunately, I am unable to reproduce the reported results under the default settings. Could you provide detailed hyperparameter settings or more guidance? Were there any problems with my experiment?

The following are my hyperparameter settings on the ICEWS14s dataset, which are the repository defaults. With these settings, BestValMRR is 0.4588 and BestTestMRR is 0.4402. I am also unable to reach the reported results on the other datasets with their default settings.

    Namespace(accumulation_steps=2, add_frequence=False, add_info_nce_loss=False, add_memory=False, add_ood_loss_energe=True, add_static=True, add_static_graph=True, alias='_entityPrediction_21', beta_end=0.02, beta_start=0.0001, concat_con=True, dataset='ICEWS14', diffusion_steps=200, dropout=0.2, emb_dropout=0.2, encoder='uvrgcn', encoder_params=None, evaluate_every=1, filter=True, gpu=1, grad_norm=1.0, heads=4, hidden_act='gelu', hidden_size=200, his_max_len=128, k_step=0, kl_interst=False, lambda_uncertainty=0.01, layer_norm=False, layer_norm_gcn=False, lr=0.001, mask_rate=0, max_len=64, n_bases=100, n_epochs=100, n_hidden=200, noise_schedule='linear', num_blocks=2, num_blocks_cross=0, num_gpu=1, patience=20, pattern_noise_radio=1, refinements_radio=0, rescale_timesteps=False, sample_nums=1, scale=50, schedule_sampler_name='lossaware', seed=2026, seen_addition=False, self_loop=True, temperature_object=0.5, test=False, train_history_len=3, update_ent_rate=0, update_rel_rate=0, wd=1e-05)

Thank you very much for reading.

AONE-NLP commented 1 month ago

Thank you for your kind words. I see that you're using ICEWS14s, while my results were based on the ICEWS14 dataset. This might be one of the reasons for the discrepancy in the results.

c18158977856 commented 1 month ago

Could I ask how to fix the problem of the missing graph_dict.pkl file?

Kizzen983 commented 1 month ago

Could I ask how to fix the problem of the missing graph_dict.pkl file?

The line history_glist = [graph_dict[tim] for tim in input_time_list]  # choose train data with time, at about line 70, builds the history graph list that is passed into the model's forward process. If you look at the forward function in model_21.py, you will see that glist is never actually used, so you can safely remove the code related to this file.
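
If you prefer not to delete the lines outright, a guarded load works as well. The following is only a rough sketch under my own assumptions; the path, the placeholder input_time_list, and the empty-list fallback are illustrative, not the repository's actual code:

    # Sketch of a guarded graph_dict.pkl load; names and paths are illustrative.
    import os
    import pickle

    input_time_list = []  # placeholder; in the script this comes from the current history window
    graph_dict = None
    graph_dict_path = os.path.join("data", "ICEWS14", "graph_dict.pkl")  # assumed location
    if os.path.exists(graph_dict_path):
        with open(graph_dict_path, "rb") as f:
            graph_dict = pickle.load(f)

    # Build the history graph list only when the pickle exists; since forward() in
    # model_21.py reportedly ignores glist, an empty list is a safe fallback.
    history_glist = (
        [graph_dict[tim] for tim in input_time_list] if graph_dict is not None else []
    )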

Kizzen983 commented 1 month ago

Thank you for your kind words. I see that you're using ICEWS14s, while my results were based on the ICEWS14 dataset. This might be one of the reasons for the discrepancy in the results.

Thank you very much for your reply. To rule out errors on my side, I re-downloaded the dataset you provided and retrained and retested on ICEWS14 and ICEWS05-15. After double-checking, I can confirm that 'ICEWS14s' was a typo in my earlier message; I was in fact using ICEWS14. Unfortunately, I still cannot reach the reported results. Taking MRR as an example: when training on ICEWS14, BestValMRR is 0.4588 and BestTestMRR is 0.4389, and when running in test mode with the test parameters, the MRR is 0.4419. I also ran experiments on the ICEWS05-15 dataset but could not reach the reported results there either. I made some modifications to the training code without changing the model or the parameters, as listed below:

  1. I deleted the wandb-related code, since I do not need it.
  2. I changed the dataset path and the model-loading path to absolute paths on my own disk.
  3. I deleted the code that reads head_dets.json and graph_dict.pkl, because these files do not exist and are not used for training or testing. If I keep that code, it throws errors (which others have also mentioned).
  4. I added a 'disable_tqdm' argument to turn off the tqdm progress bar (a minimal sketch of such a switch follows this list). For your convenience, I am attaching my run logs, the trained model, and the modified main_wowandb.py.
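
For reference, the 'disable_tqdm' switch from item 4 can be wired up roughly like this; only the flag name comes from my change, and the rest of the snippet is an illustrative sketch rather than the actual code in main_wowandb.py:

    # Sketch of a '--disable_tqdm' flag; everything besides the flag name is illustrative.
    import argparse
    from tqdm import tqdm

    parser = argparse.ArgumentParser()
    parser.add_argument("--disable_tqdm", action="store_true",
                        help="turn off the tqdm progress bar in the training loop")
    args = parser.parse_args()

    # tqdm's 'disable' parameter silences the bar when the flag is set.
    for epoch in tqdm(range(100), disable=args.disable_tqdm):
        pass  # training / evaluation work for one epoch goes here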

Thank you again for taking a look at this issue. I want you to know that your response is very helpful for my research.

ICEWS14.zip ICEWS05_15.zip main_wowandb.zip

YiqZZZhu commented 2 weeks ago

Maybe the random seed the author provided isn't the best fit for your machine. You could try a different one; I used seed 10 on a V100 and got an MRR of around 48.3 on ICEWS14.
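
If it helps, changing the seed usually means re-seeding all of the relevant generators. Here is a generic sketch of a typical PyTorch setup, not the repository's own seeding code:

    # Generic re-seeding sketch for a PyTorch run; not the repository's helper.
    import random
    import numpy as np
    import torch

    def set_seed(seed: int) -> None:
        random.seed(seed)           # Python's built-in RNG
        np.random.seed(seed)        # NumPy RNG
        torch.manual_seed(seed)     # CPU (and default CUDA) RNG
        torch.cuda.manual_seed_all(seed)  # all CUDA devices

    set_seed(10)  # the seed that worked well on my V100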

Kizzen983 commented 2 weeks ago

This is indeed a possibility I never thought of, thank you. I will try to modify it later and report my experiment afterwards.

rtoooi21 commented 1 hour ago

Maybe the random seed the author provided isn't the best fit for your machine. You could try a different one; I used seed 10 on a V100 and got an MRR of around 48.3 on ICEWS14.

Thanks so much for the tip! It really helped me out.