qitianwu / NodeFormer

The official implementation of NeurIPS22 spotlight paper "NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification"

Regarding the edge-level regularization w/o input graph #16

Closed mchen644 closed 6 months ago

mchen644 commented 9 months ago

Hello, thank you for your awesome work! I have one question: if there is no input graph, does that mean we cannot construct any edge-level regularization loss? If so, is it still possible to train the model well, given the relatively high degree of freedom? I would also like to know whether you have any suggestions for addressing this problem when there is no input graph and not enough supervised information. Thank you!

qitianwu commented 8 months ago

Hi, thanks for your interest in our work. For the no-graph case, we do not use the edge-level regularization loss that requires supervised edge information.
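
For concreteness, here is a minimal sketch of how the training objective could be assembled in the two cases. The names (`model`, `edge_index`, `lamda`) and the exact sign/weighting of the edge-level term are illustrative assumptions, not the repo's actual API:

```python
import torch.nn.functional as F

def compute_loss(model, x, y, train_mask, edge_index=None, lamda=1.0):
    # Forward pass: assume the model returns node predictions and, when an
    # input graph is given, a list of edge-level regularization terms
    # (log-likelihoods of the observed edges under the learned structure).
    out, link_losses = model(x, edge_index)

    # Supervised classification loss on the labeled training nodes.
    loss = F.cross_entropy(out[train_mask], y[train_mask])

    # Edge-level regularization is only added when observed edges exist;
    # in the no-graph case this term is simply dropped.
    if edge_index is not None and link_losses:
        loss -= lamda * sum(link_losses) / len(link_losses)

    return loss
```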

In fact, how to learn a graph structure from scratch without any observed information is an open question. To mitigate the high degree of freedom, one can introduce regularization losses (e.g., sparsity or low-rank penalties) based on the properties of graph structures, or use self-supervised signals (e.g., consistency).
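
As a rough illustration of these ideas (not part of the released code, and the tensor shapes and loss forms are assumptions), a sparsity penalty on the learned edge distribution and a consistency loss between two stochastic structure samples could look like this:

```python
import torch
import torch.nn.functional as F

def sparsity_regularizer(edge_weights, eps=1e-12):
    # Entropy-style penalty that encourages each node's learned edge
    # distribution to concentrate on a few neighbors (a sparser structure).
    # `edge_weights` is assumed to be a row-normalized [N, N] weight matrix.
    p = edge_weights.clamp_min(eps)
    return -(p * p.log()).sum(dim=-1).mean()

def consistency_regularizer(pred_1, pred_2):
    # Self-supervised consistency: predictions produced under two different
    # stochastic samples of the latent structure should agree.
    return F.kl_div(
        pred_1.log_softmax(dim=-1),
        pred_2.softmax(dim=-1),
        reduction='batchmean',
    )
```

Either term could be added to the supervised loss with a small weight when no input graph (and hence no edge-level supervision) is available.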