This repository contains the code for the paper *Compressing Deep Graph Neural Networks via Adversarial Knowledge Distillation*. Huarui He, Jie Wang, Zhanqiu Zhang, Feng Wu. SIGKDD 2022. [arXiv]
First, download the teacher knowledge from Google Drive:

```bash
python download_teacher_knowledge.py --data_name=<dataset>
# e.g.
python download_teacher_knowledge.py --data_name=cora
```
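As a quick sanity check, you can inspect the downloaded file with `torch.load`. The snippet below is a minimal sketch; the key names stored in the `.pth.tar` archive are assumptions, so print them and adapt the code to the actual contents.

```python
import torch

# Minimal sketch: inspect the distilled teacher knowledge for Cora.
# NOTE: the structure of the archive (dict keys, tensor shapes) is an
# assumption here; print it to see what it actually contains.
knowledge = torch.load("distilled/cora-knowledge.pth.tar", map_location="cpu")

if isinstance(knowledge, dict):
    for key, value in knowledge.items():
        shape = tuple(value.shape) if torch.is_tensor(value) else type(value)
        print(f"{key}: {shape}")
else:
    print(type(knowledge))
```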
Second, please run the commands in `node-level/README.md` or `graph-level/README.md` to reproduce the results.
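For instance, training a student GCN on Cora might look like the command below. This is only an illustrative sketch: the flag name (`--dataset`) is an assumption, so follow the exact commands given in `node-level/README.md`.

```bash
# Illustrative only: the flag name is an assumption; see node-level/README.md
# for the exact, supported arguments.
cd node-level/stu-gcn
python train.py --dataset=cora
```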
The repository is organized as follows:

```
GraphAKD
├─ README.md
├─ download_teacher_knowledge.py
├─ datasets
│  └─ ...
├─ distilled
│  ├─ cora-knowledge.pth.tar
│  └─ ...
├─ graph-level
│  ├─ README.md
│  └─ stu-gnn
│     ├─ conv.py
│     ├─ gnn.py
│     └─ main.py
└─ node-level
   ├─ README.md
   ├─ stu-cluster-gcn
   │  ├─ dataset
   │  │  ├─ ogbn-products_160.npy
   │  │  └─ yelp_120.npy
   │  ├─ gcnconv.py
   │  ├─ models.py
   │  ├─ sampler.py
   │  └─ train.py
   └─ stu-gcn
      ├─ gcn.py
      ├─ gcnconv.py
      └─ train.py
```
If you find this code useful, please consider citing the following paper:

```bibtex
@inproceedings{KDD22_GraphAKD,
  author    = {Huarui He and Jie Wang and Zhanqiu Zhang and Feng Wu},
  title     = {Compressing Deep Graph Neural Networks via Adversarial Knowledge Distillation},
  booktitle = {Proc. of SIGKDD},
  year      = {2022}
}
```
Our implementation is based on the code of DGL. We thank the authors for their contributions.