THUDM / GraphMAE

GraphMAE: Self-Supervised Masked Graph Autoencoders in KDD'22

Transfer Learning #20

Closed DuanhaoranCC closed 2 years ago

DuanhaoranCC commented 2 years ago

Hi and thanks for your work! What is the difference between GraphMAE and the baseline model AttrMask in transfer learning?

THINK2TRY commented 2 years ago

@DuanhaoranCC Thanks for your attention. There are two differences between GraphMAE and AttrMask:

  1. GraphMAE uses a GNN as the decoder, whereas AttrMask uses a linear layer for decoding.
  2. GraphMAE uses the scaled cosine error (SCE) as its reconstruction criterion, whereas AttrMask uses cross-entropy.
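For reference, the SCE criterion mentioned in point 2 can be sketched as below. This is a minimal illustration in PyTorch, not the repository's actual implementation; the exponent `gamma` and the function signature are assumptions based on the GraphMAE paper's formulation, where the error between original and reconstructed features is scaled by a power of the cosine distance to down-weight easy samples.

```python
import torch
import torch.nn.functional as F


def sce_loss(x: torch.Tensor, z: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """Scaled cosine error between original features x and reconstructions z.

    Both inputs have shape (num_nodes, feature_dim). `gamma` (assumed default)
    sharpens the loss so that well-reconstructed nodes contribute less.
    """
    x = F.normalize(x, p=2, dim=-1)
    z = F.normalize(z, p=2, dim=-1)
    # Per-node cosine similarity, then raise the cosine distance to gamma.
    cos = (x * z).sum(dim=-1)
    return ((1.0 - cos) ** gamma).mean()
```

In practice this loss is averaged only over the masked nodes; cross-entropy on discrete attributes (as in AttrMask) is replaced by this continuous, scale-invariant criterion.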

We haven't conducted rigorous ablation studies to isolate the contribution of each component, but the overall training paradigm of GraphMAE achieves better results than AttrMask.