Open bx-li opened 10 months ago
I got the expected results on ICEWS14s at about epoch 11.
You can try other datasets, not only ICEWS14s. I fail to run any dataset other than ICEWS14s.
Shouldn't the model be tested after its loss converges? The loss is still decreasing here. If we choose the 10th epoch as the result, why do we still set a default of 500 training epochs?
You can try other datasets, not only ICEWS14s. I fail to run any dataset other than ICEWS14s.
I am also unable to run on the ICEWS05-15 and GDELT datasets, but my confusion is whether the results in the paper were obtained by testing the raw MRR value after 500 epochs.
You can try other datasets, not only ICEWS14s. I fail to run any dataset other than ICEWS14s.
You're right. On ICEWS05-15, I found "val_Mrr_raw:0.46" (the expected value) during validation, but not during testing. I added some code to record the best model:
# main.py
...
if val_result[1][0] > best_val_Mrr_raw:  # validation MRR (raw) improved
    best_val_Mrr_raw = val_result[1][0]
    torch.save(model.state_dict(), save_model_path)  # checkpoint the best model
    print('#' * 10, 'better model found')
print('\tStart testing: ', format_now())
...
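For the test-time MRR to match the best validation MRR, the saved checkpoint has to be reloaded before evaluation; otherwise testing runs on the final-epoch weights. A minimal, self-contained sketch of the same best-checkpoint bookkeeping (the class `BestCheckpoint` is hypothetical, standing in for the snippet above):

```python
class BestCheckpoint:
    """Track the best validation MRR and the epoch that produced it."""

    def __init__(self):
        self.best_mrr = float("-inf")
        self.best_epoch = None

    def update(self, epoch, val_mrr_raw):
        """Return True when this epoch improves on the best so far."""
        if val_mrr_raw > self.best_mrr:
            self.best_mrr = val_mrr_raw
            self.best_epoch = epoch
            return True  # the caller saves model.state_dict() here
        return False


tracker = BestCheckpoint()
for epoch, mrr in enumerate([0.31, 0.42, 0.46, 0.44]):
    if tracker.update(epoch, mrr):
        pass  # e.g. torch.save(model.state_dict(), save_model_path)

print(tracker.best_epoch, tracker.best_mrr)  # → 2 0.46
```

Before testing, the weights saved for `tracker.best_epoch` would then be restored with `model.load_state_dict(torch.load(save_model_path))`.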
Thank you for the code for this paper, but I failed to reproduce the results. Can someone tell me where the results in the paper came from? I ran 500 epochs on ICEWS14s and the results are shown in the figure below, but I did not achieve the expected numbers. I slightly modified the code so that ICEWS14s can also be loaded from a folder.