Andrew-Tuen opened this issue 3 years ago
I think they are all regularizations that differ only in implementation or formula details (the NContrast loss is derived from contrastive loss, which uses negative samples). I think many regularizations work for most node classification tasks.
We tested Graph-MLP on ogbn-arxiv but it did not work; a more powerful regularization may be needed, which is our future work. (Or the regularization approach is simply not as promising as a message-passing structure; at this point I even think the model structure matters more than the loss.)
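For concreteness, the NContrast objective mentioned above can be sketched as an InfoNCE-style loss where a node's positives are its adjacency-defined neighbours and every other node acts as a negative. This is a simplified, framework-free sketch, not the paper's exact implementation (the function name and the cosine-similarity choice are my own):

```python
import math

def ncontrast_loss(z, adj, tau=1.0):
    """Neighbourhood-contrastive loss sketch.

    For each node i, pull neighbours (adj[i][j] == 1) together and push
    all other nodes away:
        L_i = -log( sum_{j in N(i)} exp(sim(i,j)/tau)
                    / sum_{k != i}  exp(sim(i,k)/tau) )
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def cos(a, b):
        na = math.sqrt(dot(a, a)) or 1.0  # guard zero vectors
        nb = math.sqrt(dot(b, b)) or 1.0
        return dot(a, b) / (na * nb)

    n = len(z)
    total, counted = 0.0, 0
    for i in range(n):
        pos = sum(math.exp(cos(z[i], z[j]) / tau)
                  for j in range(n) if j != i and adj[i][j])
        denom = sum(math.exp(cos(z[i], z[k]) / tau)
                    for k in range(n) if k != i)
        if pos > 0:  # nodes with no neighbours contribute nothing
            total += -math.log(pos / denom)
            counted += 1
    return total / max(counted, 1)
```

Since the positive terms are a subset of the denominator, the loss is non-negative, and it shrinks as neighbouring representations become more similar than non-neighbouring ones.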
Hi, are you saying that this Graph-MLP model is very similar to the Line GNN model proposed in this paper: Line Graph Neural Networks for Link Prediction? Have you tried both of them?
Not Line Graph Neural Networks for Link Prediction; this one: LINE: Large-scale Information Network Embedding.
The LINE model uses the graph structure to generate node embeddings, like a pre-training step, and those embeddings are then used for multiple downstream tasks. I think this method (Graph-MLP) just uses the original features in place of the randomly initialized embeddings and merges the two-step training into a single end-to-end training phase.
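To illustrate the first stage of that two-step pipeline, here is a rough sketch of LINE's first-order objective, which learns embeddings from edges alone (no features) via negative sampling. The function name is my own, and excluding the edge endpoints from the negative candidates is a simplification for clarity, not what LINE actually does:

```python
import math
import random

def line_first_order_step(emb, edge, num_nodes, lr=0.025, k=2, rng=random):
    """One SGD step of a LINE-style first-order objective (sketch).

    For an observed edge (i, j), maximise log sigma(u_i . u_j) and, for
    k sampled negative nodes n, maximise log sigma(-u_i . u_n).
    Embeddings are learned from structure alone, then handed to a
    downstream classifier in a separate training stage.
    """
    def sigma(x):
        return 1.0 / (1.0 + math.exp(-x))

    i, j = edge
    # Simplification: negatives are drawn from nodes outside the edge.
    cands = [n for n in range(num_nodes) if n not in edge]
    targets = [(j, 1.0)] + [(rng.choice(cands), 0.0) for _ in range(k)]

    grad_i = [0.0] * len(emb[i])
    for t, label in targets:
        score = sum(a * b for a, b in zip(emb[i], emb[t]))
        g = (label - sigma(score)) * lr
        for d in range(len(emb[i])):
            grad_i[d] += g * emb[t][d]
            emb[t][d] += g * emb[i][d]
    for d in range(len(emb[i])):
        emb[i][d] += grad_i[d]
```

Repeated steps over the edge list increase the inner product of connected nodes' embeddings; Graph-MLP instead folds this structural signal into the loss of a single end-to-end model that starts from the node features.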