Hi,
A general guideline for G-MPNN is to try out different aggregation and scoring functions (m-DistMult, HSimplE, HypE, BoxE, etc.). Table 4 in the CompGCN paper (https://openreview.net/forum?id=BylA_C4tPr) shows similar experiments for the binary FB15k-237 dataset. JF-IND (which contains purely non-binary relations) and the original JF17K may be very different (30 relations vs. 300+ relations).
Best, Y Naganand
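(For concreteness, here is a minimal sketch of what an m-DistMult-style score for an N-ary fact r(e1, ..., en) could look like. The function name, tensor shapes, and PyTorch usage are illustrative assumptions, not the actual API of the G-MPNN repository.)

```python
import torch

def m_distmult_score(rel_emb, ent_embs):
    """m-DistMult-style score for a single N-ary fact r(e1, ..., en).

    rel_emb:  (d,) relation embedding
    ent_embs: list of (d,) entity embeddings, one per position in the fact
    Returns a scalar: sum_k rel[k] * prod_i ent_i[k].
    """
    # Element-wise product of the relation embedding with all entity
    # embeddings, then sum over the embedding dimension.
    prod = rel_emb
    for e in ent_embs:
        prod = prod * e
    return prod.sum()

# Toy usage with random embeddings (dimension 8, a ternary fact).
d = 8
rel = torch.randn(d)
ents = [torch.randn(d) for _ in range(3)]
print(m_distmult_score(rel, ents))
```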
Thank you for the guidance. Yes, JF-IND and the original JF17K are quite different, and G-MPNN may need different scoring functions as well as larger hyper-parameters (e.g., --nr). It is still a solid contribution to inductive learning on N-ary datasets!
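(If it helps anyone else, a rough sketch of sweeping the --nr flag mentioned above; the entry point `main.py` and the candidate values are placeholders, not the repository's actual CLI.)

```python
import subprocess

# Hypothetical sweep over the --nr flag discussed above.
# Replace "main.py" and the value list with the repository's real
# training entry point and whatever fixed flags your run needs.
for nr in [10, 50, 100, 200]:
    cmd = ["python", "main.py", "--nr", str(nr)]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)
```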
Hi Naganand, I have read your paper and run the code locally. The inductive experiments on N-ary datasets are quite novel in this area, but I am wondering whether you have done any transductive experiments on N-ary datasets to evaluate the robustness of G-MPNN. I followed the G-MPNN settings used for JF-IND and ran the model on the original JF17K (i.e., the transductive setting). However, the MRR I obtained is below 0.25, while the state-of-the-art performance is about 0.50, as reported for BoxE (https://github.com/ralphabb/BoxE) and HypE (https://github.com/baharefatemi/HypE).
Could you provide some hyper-parameter advice or results for the transductive setting? Or is it reasonable to apply G-MPNN to transductive N-ary datasets at all?
Thanks!