snap-stanford / pretrain-gnns

Strategies for Pre-training Graph Neural Networks

Question about the results #11


fregocap commented 4 years ago

Hi,

Thanks a lot for this amazing article. I just wanted to ask about the results you got on the regression tasks (lipo, freesolv, esol). Do you have those results? I've tried it on my side, roughly along the lines sketched below, and the results don't look that impressive. However, I might be doing something wrong.
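Concretely, this is roughly what my regression fine-tuning looked like (a sketch only: `GNN_graphpred`, its forward signature, and the `from_pretrained` helper are assumed to match this repo's `chem/model.py`, and the checkpoint path is a placeholder for one of the released models):

```python
import torch
import torch.nn as nn

# Assumed to match this repo's chem/model.py; the signature may differ slightly.
from model import GNN_graphpred

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# num_tasks=1 for a single scalar target (e.g. solubility for esol).
model = GNN_graphpred(num_layer=5, emb_dim=300, num_tasks=1).to(device)
model.from_pretrained("model_gin/supervised_contextpred.pth")  # placeholder checkpoint path

criterion = nn.MSELoss()  # regression loss instead of BCEWithLogitsLoss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_epoch(loader):
    """One epoch over a PyTorch Geometric DataLoader of molecule graphs."""
    model.train()
    for batch in loader:
        batch = batch.to(device)
        pred = model(batch.x, batch.edge_index, batch.edge_attr, batch.batch)
        # MSE against the continuous label instead of binary cross-entropy.
        loss = criterion(pred.view(-1), batch.y.view(-1).float())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```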

Thanks in advance

weihua916 commented 4 years ago

Hi, thanks for your interest! Yes, we have tried the regression tasks, and I remember that the pre-trained models still performed better than the non-pre-trained ones there. Nonetheless, the performance was indeed not that impressive, as you correctly pointed out. Looking back, the primary reason for the suboptimal performance is that we used only minimal molecular features in this work. We later found in our follow-up work (Open Graph Benchmark, https://arxiv.org/abs/2005.00687) that adding extra molecular features at the data pre-processing stage greatly improves performance on both regression and classification.
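For example, the richer featurization is available as a standalone utility in the `ogb` package (a minimal sketch, assuming `ogb` is installed; it uses RDKit under the hood):

```python
from ogb.utils.mol import smiles2graph

# Convert a SMILES string into a graph dict with OGB's full feature set:
# 9-dimensional atom features and 3-dimensional bond features.
graph = smiles2graph("CCO")  # ethanol

print(graph["node_feat"].shape)   # (num_atoms, 9)
print(graph["edge_feat"].shape)   # (2 * num_bonds, 3)
print(graph["edge_index"].shape)  # (2, 2 * num_bonds), both bond directions
```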

That being said, it would be cool to explore pre-training/transfer learning on top of OGB, where we have provided unified molecular features with improved experimental results across datasets.
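For instance, the OGB versions of the datasets you mention ship with these unified features out of the box (a sketch, assuming the `ogb` and `torch_geometric` packages; `ogbg-mollipo` and `ogbg-molfreesolv` follow the same interface):

```python
from ogb.graphproppred import PygGraphPropPredDataset
# In older PyTorch Geometric versions this lives in torch_geometric.data instead.
from torch_geometric.loader import DataLoader

# ESOL with OGB's unified molecular features and a standard scaffold split.
dataset = PygGraphPropPredDataset(name="ogbg-molesol")
split_idx = dataset.get_idx_split()

train_loader = DataLoader(dataset[split_idx["train"]], batch_size=32, shuffle=True)
```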