twistedcubic / HNHN

Hypergraph representation learning: Hypergraph Networks with Hyperedge Neurons.

Problem occurs when running hypergraph.py #6

Open aeroplanepaper opened 2 years ago

aeroplanepaper commented 2 years ago

I used the citeseer data provided in the data folder and ran hypergraph.py, but the accuracy is unstable across multiple runs (roughly 15 percentage points of variation), which is much larger than the effect of the alpha and beta hyperparameters. I am wondering whether I am using the wrong settings during training. The settings are unchanged; I only added `args = gen_data_cora(args, data_path=data_path, do_val=True)` and `train(args)`. Looking forward to your reply, thanks!
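For concreteness, a minimal sketch of the driver change described above; `gen_data_cora` and `train` are the entry points named here, while the construction of `args` (a plain namespace with hypothetical field names) only stands in for the repo's own argument parser:

~~~python
from types import SimpleNamespace

from hypergraph import gen_data_cora, train  # entry points named above

# Placeholder for the repo's parsed command-line arguments; the field
# names below are hypothetical, not the script's actual options.
args = SimpleNamespace(alpha=0.1, beta=0.1, n_layers=1)

data_path = 'data/citeseer'  # placeholder path to the provided citeseer data
args = gen_data_cora(args, data_path=data_path, do_val=True)
train(args)
~~~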

twistedcubic commented 2 years ago

Hi @aeroplanepaper, that is interesting. I just pulled the codebase again and ran it out of the box, using `args = gen_data_cora(args, data_path=data_path, do_val=True)`, and consistently got a standard deviation of about 6% on the accuracy. The code reports it as

~~~
Mean VAL err 0.35+-0.06 for alpha -0.1 0.1 time 12.763674783706666
~~~

(this is for 1 layer on citeseer; more layers have a similar std, they just take longer).

Is this not what you observe using the code?
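For reference, the aggregation behind a line like `Mean VAL err 0.35+-0.06` can be sketched as below; the per-trial errors are simulated here purely for illustration, and the exact std estimator the script uses is an assumption:

~~~python
import random
import statistics

# Illustrative stand-in for validation errors collected over repeated
# trials; in practice these would come from re-running training with
# different random seeds.
errs = [0.35 + random.gauss(0, 0.06) for _ in range(10)]

mean = statistics.mean(errs)
std = statistics.stdev(errs)  # sample std; the script's exact estimator is assumed
print(f'Mean VAL err {mean:.2f}+-{std:.2f}')
~~~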

aeroplanepaper commented 2 years ago

Thanks for your reply! In my tests the results were even worse: the lowest error is around 0.35, and in the worst cases it reaches 0.51. This makes me question the experiments on choosing the alpha and beta normalization parameters, where the fluctuation is only about 0.01 to 0.02.

twistedcubic commented 2 years ago

Hi @aeroplanepaper, that is indeed surprising. Which parameter are you varying that leads to this variance, and on which dataset? In my tests, both previously (by re-pulling this repo) and last week as described in the comment above, the results across trials were consistent.