SongtaoLiu0823 / LAGNN

[ICML 2022] Local Augmentation for Graph Neural Networks

Active Learning Trick? #11

Open JhuoW opened 6 months ago

JhuoW commented 6 months ago

Hi, Songtao,

Thanks for your amazing paper. I am particularly interested in the active learning technique described in Eq. 10 of the paper. I find its implementation in cvae_pretrain.py somewhat complex, because it runs several GNN fine-tuning epochs during each mini-batch training iteration of the CVAE.
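To make sure I'm reading the code correctly, here is my simplified understanding of that interleaved loop. This is only a sketch: the names (`cvae.generate`, the PyG-style `graph` object, the entropy-based `u_score`) are my own placeholders, not the repository's actual API.

```python
# My rough mental model of the loop in cvae_pretrain.py (placeholder names,
# not the authors' exact code): one CVAE mini-batch step, then a few GNN
# fine-tuning epochs on the generated features, then a quality score.
import torch
import torch.nn.functional as F

def u_score(logits):
    # One plausible uncertainty measure: mean predictive entropy of the GNN
    # on CVAE-generated features (my assumption, not necessarily Eq. 10).
    probs = F.softmax(logits, dim=-1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean()

def pretrain_step(cvae, gnn, batch, graph, cvae_opt, gnn_opt, k_epochs=3):
    # --- CVAE mini-batch update on (neighbor, center) feature pairs ---
    x_neighbor, x_center = batch
    recon, mu, logvar = cvae(x_neighbor, x_center)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    cvae_loss = F.mse_loss(recon, x_neighbor) + kld
    cvae_opt.zero_grad(); cvae_loss.backward(); cvae_opt.step()

    # --- Generate augmented features, then fine-tune the GNN on them ---
    with torch.no_grad():
        x_aug = cvae.generate(x_center)  # hypothetical API
    for _ in range(k_epochs):  # the "several GNN epochs per CVAE batch" part
        logits = gnn(x_aug, graph.edge_index)
        gnn_loss = F.cross_entropy(logits[graph.train_mask],
                                   graph.y[graph.train_mask])
        gnn_opt.zero_grad(); gnn_loss.backward(); gnn_opt.step()

    # --- Score the generated samples; lower uncertainty = higher quality ---
    with torch.no_grad():
        score = u_score(gnn(x_aug, graph.edge_index))
    return cvae_loss.item(), score.item()
```

Is this roughly what the code is doing?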

Appendix B2 mentions that this active learning trick helps address issues with sampling from the long-tail distribution, but I find the explanation and the equation itself somewhat non-intuitive. Could you please provide a more detailed explanation of this concept? In particular, I am curious how much the U-score contributes to the overall results, since it adds considerable complexity. Did you run any ablation studies on it?

Thank you for your time and consideration.

SongtaoLiu0823 commented 6 months ago

Hi,

Sorry for the late response; I've been very busy this month. Yes, we need multiple GNNs in cvae_pretrain.py. The U-score evaluates the quality of the generated samples, so the active learning trick can prevent the model from generating samples from the long-tail distribution. Our experiments (not included in the paper) show that the model improves performance across different hyperparameter settings. You can also refer to this paper: GAN Data Augmentation Through Active Learning Inspired Sample Acquisition.
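Roughly, the idea works like this (a simplified sketch with placeholder names, not the exact code in cvae_pretrain.py): track the U-score during pretraining and keep the CVAE state whose generations score best, so low-quality long-tail samples are filtered out of the augmentation.

```python
# Simplified sketch of U-score-based model selection (placeholder names:
# cvae.generate, graph, score_fn, step_fn are illustrative, not repo code).
import copy
import torch

def select_best_cvae(cvae, gnn, graph, score_fn, step_fn, n_checks):
    best_score, best_state = float("inf"), None
    for _ in range(n_checks):
        step_fn()  # one CVAE mini-batch update + GNN fine-tuning (as above)
        with torch.no_grad():
            x_aug = cvae.generate(graph.x)          # hypothetical API
            score = score_fn(gnn(x_aug, graph.edge_index))
        if score < best_score:  # lower uncertainty = better-quality samples
            best_score = score
            best_state = copy.deepcopy(cvae.state_dict())
    if best_state is not None:
        cvae.load_state_dict(best_state)
    return cvae
```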