VITA-Group / Graph-Mixture-of-Experts

[NeurIPS'23] Graph Mixture of Experts: Learning on Large-Scale Graphs with Explicit Diversity Modeling. Haotao Wang, Ziyu Jiang, Yuning You, Yan Han, Gaowen Liu, Jayanth Srinivasa, Ramana Rao Kompella, Zhangyang Wang
MIT License

About the implementation of k-hop experts #2

Closed XiaobinHong closed 6 months ago

XiaobinHong commented 6 months ago

I appreciate your work on graph hop experts! I noticed in the code that the MoE is implemented as a multi-channel GNN whose expert outputs are then fused with sparse attention. Regarding the mixture of 1-hop and 2-hop experts described in the paper, could you explain how that mixture corresponds to the code implementation?
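To make my understanding concrete, here is a minimal sketch of how I read the fusion step. This is my own simplification, not the repo's code: I model each expert as a plain k-hop mean aggregation (the real experts are learned GNN channels), and the function names (`hop_propagate`, `gmoe_layer`) and the gating weight `W_gate` are hypothetical.

```python
import numpy as np

def hop_propagate(A, X, k):
    """Toy k-hop expert: apply row-normalized adjacency k times (no learned weights)."""
    A_hat = A / np.clip(A.sum(axis=1, keepdims=True), 1, None)
    H = X
    for _ in range(k):
        H = A_hat @ H
    return H

def gmoe_layer(A, X, expert_hops, W_gate, top_k=2):
    """Mix experts of different hop sizes with per-node sparse (top-k) gating."""
    # Run every expert; expert_hops like [1, 1, 2, 2] mixes 1-hop and 2-hop experts.
    expert_outs = np.stack([hop_propagate(A, X, k) for k in expert_hops])  # (E, N, d)
    logits = X @ W_gate                              # (N, E) per-node gating scores
    # Sparse gating: keep only the top_k experts per node, softmax over those.
    drop_idx = np.argsort(-logits, axis=1)[:, top_k:]
    keep = np.ones_like(logits, dtype=bool)
    np.put_along_axis(keep, drop_idx, False, axis=1)
    masked = np.where(keep, logits, -np.inf)
    gates = np.exp(masked - masked.max(axis=1, keepdims=True))
    gates /= gates.sum(axis=1, keepdims=True)        # (N, E), zeros off the top-k
    # Fuse: per-node weighted sum of the selected experts' outputs.
    return np.einsum('ne,end->nd', gates, expert_outs)
```

If this sketch matches the intent, then the "1-hop vs. 2-hop mixture" would live entirely in the list of per-expert hop counts, with the sparse attention choosing among them per node. Please correct me if the actual channels differ.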

Thanks a lot!