matenure/FastGCN

The sample code for our ICLR18 paper "FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling"

Test accuracy on Cora, Citeseer, and Pubmed using original split #10


JsonAC commented 5 years ago

Hi, I plan to run pubmed-original_transductive_FastGCN.py on Cora 100 times, changing the random seed from 0 to 99. So far (through seed = 64) I get: avg 79.8%, max 80.6%, min 78.4%, std 0.46%.
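For reference, a minimal sketch of how such a sweep can be summarized; the list `results` is a stand-in for whatever per-seed test accuracies your runs produce:

```python
import numpy as np

def summarize(results):
    """Print avg/max/min/std over a list of per-seed test accuracies (0-1 scale)."""
    a = np.asarray(results, dtype=float)
    print(f"avg: {a.mean():.1%}  max: {a.max():.1%}  min: {a.min():.1%}  std: {a.std():.2%}")

# Example: summarize([0.798, 0.806, 0.784, ...]) after running seeds 0..99
```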

This is much lower than the result reported in your paper (you report only 81.8%, and I don't know whether that is the average or the maximum, or how many runs it is over).

I will run the code on Citeseer and Pubmed too.

Do I need to change the hyperparameters? Could you tell me the hyperparameters for these three datasets with the original data split, so I can get results comparable with GCN?

Thank you very much!

JsonAC commented 5 years ago

The accuracy on Pubmed is close to the reported result (77.6%).

Tiiiger commented 5 years ago

I am having the same problem.

matenure commented 5 years ago

As I mentioned in other issues, the learning rate is one of the most important hyperparameters here. For Cora, the best learning rate is 0.001. You can also increase the total number of epochs to 200.
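For instance, assuming the scripts keep the TF flag names from the Kipf & Welling GCN codebase they build on (`learning_rate` and `epochs`; worth verifying against the script you actually run), the change amounts to:

```python
import tensorflow as tf

flags = tf.app.flags
FLAGS = flags.FLAGS

# Assumed flag names, mirroring the GCN codebase; check the actual script.
flags.DEFINE_float('learning_rate', 0.001, 'Initial learning rate.')
flags.DEFINE_integer('epochs', 200, 'Number of epochs to train.')
```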

JsonAC commented 5 years ago

For Cora, the new result is avg 81.4% ± 0.5%.

cai-lw commented 5 years ago

I am getting similar results to yours. But the paper reports only F1 scores, not accuracy, while the code computes only accuracy, not F1 scores; even so, the numeric values of the accuracy are very close to the F1 scores reported in the paper.
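For what it's worth, micro-averaged F1 is mathematically identical to accuracy for single-label multi-class predictions (every sample contributes exactly one true and one predicted label), which would explain the near match. A quick check with scikit-learn on toy placeholder values:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Toy single-label multi-class predictions (placeholder values).
y_true = np.array([0, 1, 2, 2, 1, 0])
y_pred = np.array([0, 2, 2, 2, 1, 1])

# Micro-averaged F1 aggregates TP/FP/FN over all classes; for single-label
# predictions this collapses to (correct / total), i.e. plain accuracy.
assert np.isclose(accuracy_score(y_true, y_pred),
                  f1_score(y_true, y_pred, average='micro'))
```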

@matenure Is it a typo in the paper, or do we need to modify the code to compute F1 scores?