jsh-hit opened 1 week ago
Hi there, thanks for the interest! The MPNN-based methods can be decomposed into a transformation (W*X) and an aggregation step. The MLP-based methods only have the transformation, without the aggregation. For your second question: the MLP updates the node embedding using just the transformation, so the MLP can be treated as an ablated version of the MPNN-based methods.
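To make the decomposition concrete, here is a minimal numpy sketch (hypothetical shapes and random data, not the paper's actual code) contrasting the two updates:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 4                                    # 5 nodes, 4-dim features (illustrative)
X = rng.normal(size=(N, d))                    # node feature matrix
A = (rng.random((N, N)) < 0.4).astype(float)   # toy adjacency matrix
W = rng.normal(size=(d, d))                    # shared weight matrix

# MPNN-style layer: transformation (X @ W) followed by neighbor aggregation (A @ ...)
mpnn_out = A @ (X @ W)

# MLP "ablation": the same transformation, but no aggregation over neighbors
mlp_out = X @ W

print(mpnn_out.shape, mlp_out.shape)  # both (5, 4)
```

Nonlinearities and normalization are omitted; the point is only that dropping the `A @` step turns the MPNN layer into a per-node MLP.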
For your first question, there are several main differences between the two types of methods. The path-based methods learn pair-wise embeddings while the MPNN-based methods learn node-wise embeddings, i.e., the "node embedding" in the path-based methods might be different for different node pairs. Both try to aggregate information from the neighbors, but the operations are different. For the path-based methods, it is hard to compare against a version without aggregation, because that would require comparing node-wise embeddings with pair-wise embeddings, which are not directly comparable.
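The node-wise vs. pair-wise distinction can be seen from the embedding shapes alone. A rough sketch (random toy data, source-conditioned propagation loosely in the style of path-based models; not any specific paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 4                                    # 5 nodes, 4-dim features (illustrative)
A = (rng.random((N, N)) < 0.4).astype(float)   # toy adjacency matrix
W = rng.normal(size=(d, d))                    # shared weight matrix

# MPNN-based: one embedding table of shape (N, d); node v has the same
# representation no matter which pair (u, v) is being scored.
node_emb = rng.normal(size=(N, d))

# Path-based (sketch): the representation is conditioned on the source node u,
# e.g. by starting propagation from u's indicator feature. Node v then gets a
# different embedding for every source, so the result has shape (N, N, d).
pair_emb = np.stack([
    A @ ((np.eye(N)[u][:, None] * np.ones((N, d))) @ W)  # one step from source u
    for u in range(N)
])

print(node_emb.shape, pair_emb.shape)  # (5, 4) (5, 5, 4)
```

This is why an "aggregation-free" ablation is awkward for path-based models: removing the propagation step also removes the source conditioning that makes the embeddings pair-wise in the first place.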
Best
Hi, I have some questions about your paper. 1. In your paper, you mentioned: "There are mainly two types of GNN-based KGC methods: Message Passing Neural Networks (MPNNs) and path-based methods. In this work, we focus on MPNN-based models, which update node features through a message passing (MP) process over the graph where each node collects and transforms features from its neighbors." However, as far as I understand, both of these methods essentially aggregate information from neighboring nodes to update the node embeddings. So, in your opinion, how important is the message passing component in path-based methods?
2. I am also having trouble understanding how an MLP can be used to update node embeddings. MPNNs make sense to me in the context of graph-structured data, but I am not quite clear on how an MLP applies in this scenario. Thank you for your time, and I look forward to your response. Best