Open HarryShomer opened 1 week ago

Hey,

I have a question about the code. From my understanding, setting the `use_embedding` argument to true creates an additional d-dimensional learnable embedding for each node, which is then concatenated with the original node features before message passing. Is this understanding correct?

If it is, my concern is that this makes it hard to compare the performance of MPLP/MPLP+ against other methods that don't use learnable node embeddings and rely only on the node features.

Thanks, Harry
Hi Harry,

Yes. When you turn on `use_embedding`, MPLP appends a learnable embedding vector to each node in the graph.

Node embeddings are quite a standard way of performing LP tasks; they can be seen as a variant of the traditional matrix-factorization approach to LP. The node embeddings keep the complexity at the node level, which makes MPLP comparable to other methods.

In fact, MPLP hardly relies on node features for LP. On PPA and Citation2, MPLP works better when the node features are dropped entirely.
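In case it helps, here is a minimal sketch of the mechanism being discussed (the class and argument names are hypothetical; the actual MPLP code may be organized differently):

```python
import torch
import torch.nn as nn

class NodeRepresentation(nn.Module):
    """Sketch: optionally concatenate a learnable per-node embedding
    with the raw node features before message passing."""

    def __init__(self, num_nodes, feat_dim, embed_dim, use_embedding=True):
        super().__init__()
        self.use_embedding = use_embedding
        if use_embedding:
            # One learnable d-dimensional vector per node in the graph.
            self.node_embedding = nn.Embedding(num_nodes, embed_dim)

    def forward(self, x, node_ids):
        # x: [num_nodes, feat_dim] raw node features
        if self.use_embedding:
            emb = self.node_embedding(node_ids)  # [num_nodes, embed_dim]
            x = torch.cat([x, emb], dim=-1)      # input fed to the GNN
        return x
```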
Thanks. I appreciate the response (also congrats on the acceptance!).
I agree that learnable embeddings have a long history in link prediction.
I guess my concern is that other methods (e.g., NCN, BUDDY, etc.) don't use learnable embeddings. Embeddings vastly increase the number of parameters and are known to boost the performance of GNNs, so I find it difficult to make a one-to-one comparison with the other methods.
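As a back-of-the-envelope illustration of the scale involved, an embedding table alone contributes `num_nodes * dim` parameters (the numbers below are only illustrative; ogbl-citation2 has roughly 2.9M nodes):

```python
import torch.nn as nn

num_nodes, dim = 2_927_963, 256  # illustrative sizes for ogbl-citation2
table = nn.Embedding(num_nodes, dim)

# ~7.5e8 learnable parameters from the embedding table alone,
# typically far more than the GNN weights themselves.
print(sum(p.numel() for p in table.parameters()))
```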
I'm curious whether you've tried running it without the node embeddings? I understand if not; I'm just wondering, as I recently came across your work and found it quite interesting.
Thanks again!
Thanks!
No, we haven't tried it without node embeddings, but it would be interesting to see how it works. I will find some time to run it.