zwt233 / NDLS

Node Dependent Local Smoothing for Scalable Graph Learning (NeurIPS'21, Spotlight)

Inductive setting #3

Open sungeun532 opened 2 years ago

sungeun532 commented 2 years ago

What is the specific training process for the inductive setting? If a new node (unseen during training) arrives at test time, the adjacency matrix differs from the one used in training, and the LSI value of each node also changes. Wouldn't it then be impossible to use the MLP parameters optimized on the training graph structure?

Doehong commented 11 months ago

Hi, I'm looking for a way to run the inductive experiment. Did you figure it out? I have the same confusion.

zwt233 commented 11 months ago

> Hi, I'm looking for a way to run the inductive experiment. Did you figure it out? I have the same confusion.

Hi, the process of computing the LSI value for each node is non-parametric and training-free (it can be treated as a data pre-processing step). The main motivation of LSI is to obtain good, node-adaptive smoothed features. In most graph datasets, the performance bottleneck is the feature representation rather than the downstream model. In our experiments we found that the influence of the MLP parameters on performance is very small; you can even train an XGBoost model and get similar performance. So we reuse the MLP parameters optimized on the training graph structure in our experiments.
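
As a rough illustration of that workflow (not the repository's actual code), the sketch below treats the node-dependent smoothing as a training-free pre-processing step: a classifier is fit once on smoothed training features, and at test time the smoothing is simply recomputed on the enlarged graph before reusing the same trained classifier on the new nodes. The `smooth_features` helper and the randomly drawn per-node step counts are simplified stand-ins for the LSI-derived smoothing depths in the paper, and the data here is a toy random graph.

```python
# Minimal sketch of the inductive workflow, assuming a simplified
# fixed-depth-per-node smoothing in place of the paper's LSI computation.
import numpy as np
import scipy.sparse as sp
from sklearn.neural_network import MLPClassifier

def sym_norm_adj(adj: sp.spmatrix) -> sp.csr_matrix:
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    adj = adj + sp.eye(adj.shape[0])
    deg = np.asarray(adj.sum(axis=1)).flatten()
    d_inv_sqrt = sp.diags(np.power(deg, -0.5))
    return (d_inv_sqrt @ adj @ d_inv_sqrt).tocsr()

def smooth_features(adj: sp.spmatrix, x: np.ndarray, steps_per_node: np.ndarray) -> np.ndarray:
    """Parameter-free local smoothing: node i keeps the propagation result
    after its own number of steps (a stand-in for the LSI-derived depth)."""
    a_hat = sym_norm_adj(adj)
    out = x.copy()
    h = x.copy()
    for k in range(1, int(steps_per_node.max()) + 1):
        h = a_hat @ h                      # one more propagation step
        mask = steps_per_node >= k         # nodes that still want deeper smoothing
        out[mask] = h[mask]
    return out

# --- training graph: smooth the features, then fit an MLP on the nodes ---
rng = np.random.default_rng(0)
n_train, d, c = 100, 16, 3
adj_train = sp.random(n_train, n_train, density=0.05, random_state=0)
adj_train = ((adj_train + adj_train.T) > 0).astype(float)
x_train = rng.normal(size=(n_train, d))
y_train = rng.integers(0, c, size=n_train)
k_train = rng.integers(1, 5, size=n_train)           # per-node smoothing depth

mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
mlp.fit(smooth_features(adj_train, x_train, k_train), y_train)

# --- test time: new nodes arrive, so re-run the training-free smoothing on
# the enlarged graph and reuse the already-trained MLP unchanged ---
n_all = n_train + 20
adj_all = sp.random(n_all, n_all, density=0.05, random_state=1)
adj_all = ((adj_all + adj_all.T) > 0).astype(float)
x_all = np.vstack([x_train, rng.normal(size=(20, d))])
k_all = rng.integers(1, 5, size=n_all)                # recomputed per-node depths
pred_new = mlp.predict(smooth_features(adj_all, x_all, k_all)[n_train:])
print(pred_new)
```

The key point the reply makes is visible in the split: all graph-dependent work lives in `smooth_features`, which has no learned parameters, while the MLP only ever sees per-node feature vectors, so it transfers to unseen nodes once their smoothed features are recomputed.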