Closed timruhkopf closed 2 years ago
Hi,
I can't really help with AutoGL as I don't know how it works, but I can point you to another repo with a cleaner implementation, where we used StarE as one of the compared models for another paper.
As to Q2: yes, StarEEncoder with forward_base is the actual GNN encoder. The link prediction decoder is a Transformer with average pooling, followed by a dot product against the updated node representations.
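For readers unfamiliar with the model, the decoding step described above can be sketched roughly as follows. This is a minimal illustrative sketch, not the actual StarE code: the class name, dimensions, and module layout are all assumptions; only the overall shape (Transformer over the statement tokens, average pooling, dot product against node embeddings) follows the description in this reply.

```python
import torch
import torch.nn as nn

class HypotheticalLPDecoder(nn.Module):
    """Illustrative sketch (not the StarE implementation): a Transformer over
    the embedded statement tokens, average pooling over the sequence, then a
    dot product against the GNN-updated node representations."""

    def __init__(self, emb_dim: int = 32, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=emb_dim, nhead=n_heads, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, seq_emb: torch.Tensor, node_emb: torch.Tensor) -> torch.Tensor:
        # seq_emb:  (batch, seq_len, emb_dim) - embedded statement tokens
        # node_emb: (num_entities, emb_dim)   - GNN-updated node representations
        h = self.transformer(seq_emb)   # contextualize the token sequence
        pooled = h.mean(dim=1)          # average pooling over the sequence
        return pooled @ node_emb.t()    # entity scores: (batch, num_entities)

# smoke test with random tensors
decoder = HypotheticalLPDecoder()
scores = decoder(torch.randn(8, 5, 32), torch.randn(100, 32))
print(scores.shape)  # torch.Size([8, 100])
```

The returned matrix can then be fed into a standard link-prediction loss, ranking all entities per query statement.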
Thank you for the referral!
I still need to get the hang of it, but it's indeed cleaner. Thank you!
Hi there!
Currently I am working my way through your code to make it accessible to AutoGL, in order to optimize your hyperparameters in a principled, model-driven manner.
To that end, I would really appreciate it if you could provide more documentation on the various models & methods (beyond the papers' abstract descriptions) and disentangle run.py so that there is a minimal setup for the main StarE (H) + Transformer (H) configuration on your WD50K_X datasets. This would greatly improve the reusability of your module, and other researchers would surely appreciate it when benchmarking against your model. A showcase in the form of a README.md or a Jupyter notebook would be handy for that purpose.
I have a detail question as well: to make the model accessible to AutoGL, I need to provide a class with a link-prediction interface that includes lp_encode & lp_decode methods (which are based on the forward method). From what I gather, the encoder part is StarE, and I think it should suffice to use the else branch of StarEEncoder.forward_base (i.e. not the triples-only path) and everything that follows. If that is correct, then StarE_Transformer.forward, line 80ff, should be the Transformer part, which would become lp_decode. Right?
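To make the intended split concrete, here is a rough sketch of the kind of wrapper I have in mind. The class and method names (lp_encode / lp_decode) follow the interface mentioned above; the encoder and decoder arguments are stand-ins for StarEEncoder.forward_base and the Transformer head in StarE_Transformer.forward, and everything else here is a hypothetical illustration, not AutoGL's or StarE's actual API.

```python
import torch
import torch.nn as nn

class StarELinkPredictorWrapper(nn.Module):
    """Hypothetical wrapper sketching the encoder/decoder split discussed
    above. `encoder` stands in for the GNN pass (StarEEncoder.forward_base,
    else branch) and `decoder` for the Transformer scoring head; neither the
    names nor the signatures are taken from the real codebases."""

    def __init__(self, encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder

    def lp_encode(self, *graph_inputs):
        # GNN pass: returns the updated node representations
        return self.encoder(*graph_inputs)

    def lp_decode(self, node_emb: torch.Tensor, queries: torch.Tensor):
        # Scoring head: rank all entities against each query statement
        return self.decoder(queries, node_emb)

    def forward(self, graph_inputs, queries):
        node_emb = self.lp_encode(*graph_inputs)
        return self.lp_decode(node_emb, queries)
```

With dummy encoder/decoder modules plugged in, lp_encode produces node embeddings and lp_decode turns a batch of query embeddings into per-entity scores, which is the shape of interface AutoGL-style tooling would consume.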
Thanks in advance.