Diego999 / pyGAT

PyTorch implementation of the Graph Attention Network model by Veličković et al. (2017, https://arxiv.org/abs/1710.10903)
MIT License
2.92k stars · 692 forks

The test accuracy is pretty low (53.5%) on the Citeseer dataset #58

Open · erichanxin opened this issue 3 years ago

erichanxin commented 3 years ago

I switched the dataset to Citeseer and trained for 100 epochs. The training log for the 99th epoch is: Epoch: 0099 loss_train: 0.9922 acc_train: 0.7000 loss_val: 1.0769 acc_val: 0.7100 time: 38.2587s

However, the final test result is: Test set results: loss= 1.2956 accuracy= 0.5350

The test accuracy is pretty low, nowhere near the roughly 72% reported in the paper.
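
For reference, the switch itself is roughly the following (a minimal sketch; it assumes utils.load_data keeps the pygcn-style load_data(path, dataset) signature and that citeseer.content / citeseer.cites are placed under ./data/citeseer/):

```python
# Sketch of the dataset switch. Assumes the pygcn-style loader in utils.py
# and that citeseer.content / citeseer.cites sit under ./data/citeseer/.
from utils import load_data

adj, features, labels, idx_train, idx_val, idx_test = load_data(
    path="./data/citeseer/", dataset="citeseer"
)
print(features.shape, int(labels.max()) + 1)  # feature dim and number of classes
```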

SuperBruceJia commented 3 years ago

Have you solved the problem?

MichailChatzianastasis commented 3 years ago

Is there any update on this issue? I also get low performance on Citeseer.

erichanxin commented 3 years ago

Actually, no. I haven't found a way to solve this problem yet.

LQchen1 commented 2 years ago

Based on your epoch log, it looks like the model is overfitting; maybe try adjusting the dropout parameter (see the sketch below).
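
For example, something like the following could sweep a few dropout / weight-decay settings (a sketch; the flag names --dropout, --weight_decay, and --epochs are assumed to match the argparse options defined in train.py, so double-check them there):

```python
# Sketch: sweep dropout and weight decay by re-running train.py.
# The flag names (--dropout, --weight_decay, --epochs) are assumed to match
# the argparse options defined at the top of train.py; verify them there.
import itertools
import subprocess

for dropout, weight_decay in itertools.product([0.5, 0.6, 0.7], [5e-4, 1e-3]):
    print(f"Running with dropout={dropout}, weight_decay={weight_decay}")
    subprocess.run(
        ["python", "train.py",
         "--dropout", str(dropout),
         "--weight_decay", str(weight_decay),
         "--epochs", "1000"],
        check=True,
    )
```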