hjlin0515 closed this issue 4 years ago
Hi Haojie,
I ran the code on DD, and the performance is about 82.0. You could try increasing the dropout rate of the GCN to 0.35 in ops.py, line 117.
Best,
Hongyang
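For readers following along, here is a minimal sketch of what dropout on a GCN layer's input features looks like. This is illustrative only, not the repo's actual ops.py code; `gcn_layer` and its arguments are hypothetical stand-ins.

```python
import numpy as np

def gcn_layer(adj, feats, weight, drop_p=0.35, training=True, rng=None):
    """Minimal GCN layer with dropout on the input features.
    A sketch only; the real implementation lives in ops.py of the repo."""
    if training:
        rng = rng or np.random.default_rng(0)
        keep = rng.random(feats.shape) >= drop_p
        feats = feats * keep / (1.0 - drop_p)  # inverted-dropout scaling
    h = adj @ feats @ weight                   # propagate, then project
    return np.maximum(h, 0.0)                  # ReLU
```

Raising `drop_p` from the default to 0.35 regularizes more aggressively, which is the kind of change the suggestion above refers to.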
Thank you very much for your answer. I increased the dropout rate to 0.35; after many attempts, the performance is about 81.4. Also, is it appropriate to choose the best accuracy across all epochs as the final test result? The test results do not seem very stable.
Hi,
I am sorry about the results; I will check them again. This performance-reporting method is widely used, for example by GIN. Graph datasets are not as large as ImageNet, which can provide a stable performance evaluation, so the community currently selects the best average performance over 10-fold experiments.
Best,
Hongyang
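The "best average performance over 10-fold experiments" described above can be sketched as follows. The function name and shapes are hypothetical; this is one reading of the protocol, not the repo's actual evaluation code.

```python
import numpy as np

def best_average_accuracy(acc):
    """acc: array of shape (n_folds, n_epochs) holding test accuracy
    per fold per epoch. Average across folds at each epoch, then
    return the best epoch's average accuracy and that epoch's index."""
    per_epoch = acc.mean(axis=0)          # mean over the folds, per epoch
    best_epoch = int(per_epoch.argmax())  # epoch with the highest average
    return float(per_epoch[best_epoch]), best_epoch
```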
Thank you for your explanation!
Hi, does the "best average performance for 10-fold" mean averaging the 10-fold performance at each epoch and choosing the best across all epochs (as in DiffPool, one of the methods you compared against)? In your code, however, you first pick the best performance within each fold and then average these best values as the evaluation metric. Also, I cannot find the corresponding computation in the GIN code; could you provide a link? Thank you very much!
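The two readings being contrasted here can be made concrete with a small sketch (function names are hypothetical). Note that best-per-fold-then-average is always at least as high as average-then-best, since the mean of per-fold maxima bounds the maximum of per-epoch means, so the two protocols are not interchangeable.

```python
import numpy as np

def avg_then_best(acc):
    """DiffPool-style: average across folds at each epoch, take the best epoch."""
    return float(acc.mean(axis=0).max())

def best_then_avg(acc):
    """What the repo's code reportedly does: best epoch within each fold,
    then average those per-fold bests."""
    return float(acc.max(axis=1).mean())
```

For example, with acc = [[0.80, 0.82], [0.84, 0.78]], avg_then_best gives 0.82 while best_then_avg gives 0.83, so the second protocol is the more optimistic of the two.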
I have run "run_GUNet.sh DD 0" many times, and the graph classification accuracy only reaches 81.2% (the corresponding result reported in the paper is 82.43%). Is there anything wrong in the hyper-parameter settings, or something else? The other two datasets seem normal. Thank you.