Lemour-sudo opened this issue 3 years ago
Looks like it. Graph's structure I think comes into play in the loss function only. Also, since you were able to reproduce, did you try the attribute encoder as an MLP as mentioned in the paper? In the code it looks like only a linear layer is used.
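To illustrate what "structure only in the loss" could look like, here is a minimal hedged sketch (function and variable names are mine, not from the DEAL repo): the encoder never sees the adjacency, and the graph enters solely through precomputed pairwise distances that weight a margin ranking loss.

```python
import torch

def distance_weighted_margin_loss(emb, pos_pairs, neg_pairs, dist, margin=1.0):
    # emb: (N, d) node embeddings from a structure-agnostic encoder
    # pos_pairs / neg_pairs: (P, 2) index tensors of linked / unlinked node pairs
    # dist: (N, N) precomputed shortest-path distances -- the only place
    #       the graph structure enters training in this sketch
    pos_score = (emb[pos_pairs[:, 0]] * emb[pos_pairs[:, 1]]).sum(-1)
    neg_score = (emb[neg_pairs[:, 0]] * emb[neg_pairs[:, 1]]).sum(-1)
    # down-weight negatives that are far apart in the graph
    w = 1.0 / dist[neg_pairs[:, 0], neg_pairs[:, 1]].clamp(min=1.0)
    return torch.relu(margin + w * neg_score - pos_score).mean()
```

This is only meant to show the pattern; the actual loss in the repo may weight or sample pairs differently.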
Yes, I do believe the graph's structure is only accounted for in the loss function in the authors' original code. I had to refactor the code a bit to allow switching between attribute encoders and structure encoders, so I managed to try an MLP as the attribute encoder.
Thanks, I was able to figure out the MLP.
Also, another thing: while reproducing, did you figure out why, in the `ind_eval()` function here https://github.com/working-yuhao/DEAL/blob/e58b2601b6102e2ebc80f20e7a92343c9e08daec/utils.py#L673, `node_emb` is a clone of `anode_emb`? How do `attr_layer` and `inter_layer` behave any differently in that case?
The way I see it, `attr_layer` represents the attribute model part, and `inter_layer` may be meant to represent the layer that connects the attribute and structure parts.
Kindly assist with these two problems:
1. The results for PPI barely reach 0.5 for both AP and AUC in the transductive and inductive settings.
2. Could you kindly share the hyperparameter settings used in the code, to ease the reproducibility process?
Thank you.