Closed · mchen644 closed this issue 6 months ago
Hi, thanks for your interest in our work. For the no-graph case, we do not use the edge-level regularization loss that requires supervised edge information.
In fact, how to learn a graph structure from scratch without any observed information is an open question. To constrain the high degree of freedom, one can introduce regularization losses based on properties of graph structures (e.g., sparsity or low rank) or on self-supervised signals (e.g., consistency).
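As a minimal illustration of the regularizers mentioned above (this is a sketch, not code from the repository; the function name and weights are hypothetical), a sparsity penalty (L1 norm) plus a low-rank penalty (nuclear norm, i.e., the sum of singular values) on a learned adjacency matrix could look like:

```python
import numpy as np

def graph_regularization(A, w_sparse=1e-3, w_rank=1e-3):
    """Hypothetical structural regularizer for a learned adjacency matrix A.

    - sparsity: L1 norm encourages few edges.
    - low rank: nuclear norm (sum of singular values) encourages
      block/community-like structure.
    """
    sparsity = np.abs(A).sum()
    nuclear = np.linalg.svd(A, compute_uv=False).sum()
    return w_sparse * sparsity + w_rank * nuclear

# Example: penalize a randomly initialized dense adjacency matrix.
A = np.random.rand(8, 8)
loss = graph_regularization(A)
```

In practice this scalar would be added to the task loss, so the graph learner is pushed toward sparse, low-rank structures even without any supervised edge information.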
Hello, thank you for your awesome work! I just came up with one question: if there is no input graph, does that mean we cannot construct any edge-level regularization loss? If so, is it still feasible to train the model given the relatively high degree of freedom? Do you have any suggestions on how to address this problem when there is no input graph and not enough supervised information? Thank you!