PetarV- / GAT

Graph Attention Networks (https://arxiv.org/abs/1710.10903)
https://petar-v.com/GAT/
MIT License

What is difference between transductive and inductive in GNN? #48

Closed guotong1988 closed 4 years ago

guotong1988 commented 4 years ago

It seems that in a GNN (graph neural network), in the transductive setting, we input the whole graph, mask the labels of the validation data, and predict the labels for the validation data.

But it seems that in the inductive setting, we also input the whole graph (but sampled into batches), mask the labels of the validation data, and predict the labels for the validation data.

Thank you very much. @PetarV-

PetarV- commented 4 years ago

Hi,

Thanks for your issue!

In inductive learning, during training you are unaware of the nodes used for testing. For the specific inductive dataset here (PPI), the test graphs are disjoint and entirely unseen by the GNN during training.
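To make the distinction concrete, here is a minimal sketch (hypothetical arrays and shapes, not this repo's actual code) of what the model sees at training time in each setup:

```python
import numpy as np

# --- Transductive (e.g. Cora): ONE graph, all nodes visible at training ---
num_nodes = 6
adj = np.eye(num_nodes)                      # placeholder adjacency (self-loops only)
labels = np.array([0, 1, 0, 1, 0, 1])
train_mask = np.array([1, 1, 1, 0, 0, 0], dtype=bool)

# The whole graph (features + structure) is used in the forward pass;
# only the LOSS is restricted to the training nodes.
logits = np.random.randn(num_nodes, 2)       # stand-in for model output
train_loss_terms = logits[train_mask]        # test nodes still join message
                                             # passing, but never the loss

# --- Inductive (e.g. PPI): disjoint graphs; test graphs unseen in training ---
train_graphs = [np.eye(4), np.eye(5)]        # placeholder training graphs
test_graphs = [np.eye(7)]                    # an entirely new graph at test time
# The model is applied per-graph, so it must not bake in any one adjacency.
```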

Hope this helps!

Thanks, Petar

guotong1988 commented 4 years ago

Thank you very much.

guotong1988 commented 4 years ago

Thank you for your response. Could you please further explain the difference in model design? Thank you and thank you again.

PetarV- commented 4 years ago

Hello,

I'm not sure I fully understood your query, but assuming you're asking about the architectural differences between a transductive and an inductive model -- basically, inductive models make no strong assumptions about, and no direct use of, the specific structure of the training graph.
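For instance, a purely local aggregation layer (a hedged NumPy sketch for intuition, not the GAT implementation itself) has weights whose shapes depend only on the feature dimensions, never on the node count or a particular adjacency, so the same learned weights apply to any graph:

```python
import numpy as np

def mean_aggregate_layer(features, adj, weight):
    """One inductive message-passing step: average each node's
    neighbours (plus itself) and apply a shared linear map.
    `weight` is (in_dim, out_dim) -- independent of graph size."""
    adj_hat = adj + np.eye(adj.shape[0])             # add self-loops
    deg = adj_hat.sum(axis=1, keepdims=True)         # node degrees
    neighbourhood_mean = (adj_hat @ features) / deg  # local averaging
    return neighbourhood_mean @ weight               # shared weights

rng = np.random.default_rng(0)
weight = rng.standard_normal((8, 4))                 # "learned" once

# The SAME weight matrix applies to graphs of any size:
small = mean_aggregate_layer(rng.standard_normal((5, 8)), np.zeros((5, 5)), weight)
large = mean_aggregate_layer(rng.standard_normal((50, 8)), np.zeros((50, 50)), weight)
```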

One example of a non-inductive algorithm is the Graph Fourier Transform (https://arxiv.org/abs/1312.6203), which explicitly uses the graph's Laplacian matrix when computing its operations. This means that the learnt GNN layer is dependent on that specific Laplacian, and if the model sees a different matrix at test time, the filters are unlikely to be reusable.
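To illustrate why such spectral filters are graph-specific, here is a small hypothetical sketch: the filter is a set of coefficients, one per eigenvalue of one particular graph Laplacian, so it is tied to that graph's eigenbasis (and even its node count) and cannot transfer to a graph with a different Laplacian:

```python
import numpy as np

def normalized_laplacian(adj):
    """Symmetric normalized Laplacian: I - D^{-1/2} A D^{-1/2}."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1)))
    return np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt

def spectral_filter(signal, adj, theta):
    """Filter a node signal with spectral coefficients `theta`.
    `theta` has one entry PER EIGENVALUE, so it is bound to this
    graph's eigenbasis -- the core of the non-inductive limitation."""
    lap = normalized_laplacian(adj)
    eigvals, eigvecs = np.linalg.eigh(lap)   # graph-specific basis
    return eigvecs @ np.diag(theta) @ eigvecs.T @ signal

# A filter "learned" on a 4-node path graph cannot even be APPLIED
# to a 5-node graph: the shapes no longer match.
path4 = np.zeros((4, 4))
for i in range(3):
    path4[i, i + 1] = path4[i + 1, i] = 1
theta = np.ones(4)                           # stand-in learned coefficients
out = spectral_filter(np.ones(4), path4, theta)  # fine on the training graph
```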

Thanks, Petar

guotong1988 commented 4 years ago

Thank you for your response. Yes, I am asking about the architectural differences between a transductive and an inductive model. I will try to understand your answer. Thank you again.