yanghu819 / Graph-MLP


How does the model stay robust while still using the adjacency information implicitly? #6

alvinsun724 commented 3 years ago

Hi, your work is really inspiring and I have one question.

In the paper, you say the model is more robust when facing large-scale graph data and corrupted adjacency information, because it uses the adjacency information only implicitly, unlike GCN, which uses it directly during the information aggregation phase.
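For concreteness, this is the distinction I mean (a rough sketch with my own placeholder names and shapes, not code from this repo):

```python
import torch

n, d_in, d_hid = 100, 16, 32
x = torch.randn(n, d_in)        # node features
adj_norm = torch.eye(n)         # stand-in for the normalized adjacency
w1 = torch.randn(d_in, d_hid)

# GCN layer: the adjacency enters the forward computation directly
h_gcn = torch.relu(adj_norm @ x @ w1)

# Graph-MLP layer: a plain feed-forward step; the adjacency never appears here
h_mlp = torch.relu(x @ w1)
```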

However, I am wondering: _you still use the adjacency information_ (possibly even its 4th power) when computing the NContrast loss. Given that the NContrast loss needs the adjacency information during training, how does the model maintain robust performance when the adjacency information is heavily corrupted?
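For reference, the loss I am referring to is roughly the following (a minimal sketch with my own variable names, assuming `adj_r` holds the r-th power of the adjacency matrix; this is my reading of the paper, not the repo's code):

```python
import torch
import torch.nn.functional as F

def ncontrast_loss(z, adj_r, tau=1.0):
    """Neighbor-contrastive loss sketch: nodes connected in adj_r
    (the r-th adjacency power) are treated as positive pairs."""
    z = F.normalize(z, dim=1)                     # cosine similarity via dot products
    sim = torch.exp(z @ z.t() / tau)              # exp-scaled pairwise similarities
    mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, 0.0)              # exclude self-pairs
    pos = (sim * (adj_r > 0).float()).sum(dim=1)  # positives: r-hop neighbors
    return -torch.log(pos / sim.sum(dim=1) + 1e-8).mean()
```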

Is that because you only need the adjacency information during training, rather than in both the training and test phases? Or is there some other justification?

I am really confused about that and look forward to your reply.

Thanks a lot

yanghu819 commented 2 years ago

Hi, I think this issue helps: https://github.com/yanghu819/Graph-MLP/issues/5.

alvinsun724 commented 2 years ago

> Hi, I think this issue helps: #5.

Thanks. Is my understanding correct that the difference in performance between Graph-MLP and GCN is mainly due to no adjacency information being used in the test phase of Graph-MLP?
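In other words, is the division of labor roughly the following (a sketch under my own assumptions, reusing the `ncontrast_loss` sketch above; `mlp` here is a hypothetical stand-in, not the repo's model)?

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

mlp = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 7))
x, y = torch.randn(100, 16), torch.randint(0, 7, (100,))
adj_r = (torch.rand(100, 100) > 0.9).float()   # stand-in for the adjacency power

# Training: the adjacency enters only through the contrastive term
hidden = mlp[:2](x)                            # intermediate embedding
logits = mlp[2](hidden)
loss = F.cross_entropy(logits, y) + 1.0 * ncontrast_loss(hidden, adj_r)

# Inference: a pure feature-based forward pass; corrupted edges cannot enter here
with torch.no_grad():
    preds = mlp(torch.randn(20, 16)).argmax(dim=1)
```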