OpenI Community (Chinese) | OpenHGNN [CIKM 2022] | Space4HGNN [SIGIR 2022] | Benchmark & Leaderboard | Slack Channel
This is an open-source toolkit for Heterogeneous Graph Neural Networks (HGNNs) based on DGL (Deep Graph Library) and PyTorch. It integrates state-of-the-art models for heterogeneous graphs.
1. Python environment (Optional): We recommend using Conda package manager
conda create -n openhgnn python=3.6
conda activate openhgnn
2. Install PyTorch: Follow the official instructions to pick the proper command for your OS and CUDA version. For example:
pip install torch torchvision torchaudio
3. Install DGL: Follow the official instructions to pick the proper command for your OS and CUDA version. For example:
pip install dgl -f https://data.dgl.ai/wheels/repo.html
4. Install openhgnn:
Install from PyPI:
pip install openhgnn
Install from source:
git clone https://github.com/BUPT-GAMMA/OpenHGNN
# If you encounter a network error, try cloning from OpenI as follows.
# git clone https://git.openi.org.cn/GAMMALab/OpenHGNN.git
cd OpenHGNN
pip install .
5. Install gdbi (Optional):
Install gdbi from git:
pip install git+https://github.com/xy-Ji/gdbi.git
Install the graph database drivers from PyPI:
pip install neo4j==5.16.0
pip install nebula3-python==3.4.0
python main.py -m model_name -d dataset_name -t task_name -g 0 --use_best_config --load_from_pretrained
usage: main.py [-h] [--model MODEL] [--task TASK] [--dataset DATASET] [--gpu GPU] [--use_best_config] [--use_database]
optional arguments:
-h, --help
show this help message and exit
--model -m
name of the model
--task -t
name of the task
--dataset -d
name of the dataset
--gpu -g
controls which GPU you will use. If you do not have a GPU, set -g -1.
--use_best_config
use_best_config means you use the best config for the model on the dataset. If you want to
set different hyper-parameters, modify openhgnn.config.ini manually. The best config
will override the parameters in config.ini.
--load_from_pretrained
will load the model from a default checkpoint.
--use_database
get the dataset from a database
--mini_batch_flag
train the model with mini-batches
--graphbolt
mini-batch training with dgl.graphbolt
--use_distributed
train the model in a distributed way
e.g.:
python main.py -m GTN -d imdb4GTN -t node_classification -g 0 --use_best_config
python main.py -m RGCN -d imdb4GTN -t node_classification -g 0 --mini_batch_flag --graphbolt
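The command-line options above can be mirrored with a small `argparse` sketch. This is a hypothetical reconstruction for illustration, not the actual contents of `main.py`; flag names follow the usage text, while the defaults are assumptions:

```python
import argparse

# A minimal sketch of the CLI described above. Defaults are assumptions,
# not the values hard-coded in OpenHGNN's real main.py.
parser = argparse.ArgumentParser(description="OpenHGNN entry point (sketch)")
parser.add_argument("-m", "--model", default="RGCN", help="name of the model")
parser.add_argument("-t", "--task", default="node_classification", help="name of the task")
parser.add_argument("-d", "--dataset", default="imdb4GTN", help="name of the dataset")
parser.add_argument("-g", "--gpu", type=int, default=-1, help="GPU id; -1 means CPU")
parser.add_argument("--use_best_config", action="store_true")
parser.add_argument("--load_from_pretrained", action="store_true")
parser.add_argument("--use_database", action="store_true")
parser.add_argument("--mini_batch_flag", action="store_true")
parser.add_argument("--graphbolt", action="store_true")
parser.add_argument("--use_distributed", action="store_true")

# Parse the first example command from above.
args = parser.parse_args(
    ["-m", "GTN", "-d", "imdb4GTN", "-t", "node_classification", "-g", "0", "--use_best_config"]
)
print(args.model, args.dataset, args.gpu, args.use_best_config)
```

Boolean flags use `action="store_true"`, so they default to `False` and flip to `True` when passed, which matches how the examples combine them.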
Note: If you are interested in a particular model, refer to the model list below.
Refer to the docs for more basic and in-depth usage.
tensorboard --logdir=./openhgnn/output/{model_name}/
e.g.:
tensorboard --logdir=./openhgnn/output/RGCN/
Note: To visualize results, you need to train the model first.
Take Neo4j and the IMDB dataset as an example:
Construct CSV files for the dataset (node-level: A.csv, edge-level: A_P.csv).
Import the CSV files into the database:
LOAD CSV WITH HEADERS FROM "file:///data.csv" AS row
CREATE (:graphname_labelname {ID: row.ID, ... });
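The node-level (A.csv) and edge-level (A_P.csv) files from the step above can be generated with Python's standard `csv` module. This is only a sketch: the file names follow the naming used above, but the column names beyond `ID` and the sample rows are assumptions about your schema:

```python
import csv

# Node-level file: one row per author node (A.csv in the naming above).
# The "name" column is an assumed attribute; "ID" matches row.ID in the
# LOAD CSV statement above.
authors = [{"ID": "a0", "name": "Author 0"}, {"ID": "a1", "name": "Author 1"}]
with open("A.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["ID", "name"])
    writer.writeheader()
    writer.writerows(authors)

# Edge-level file: one row per author-paper edge (A_P.csv in the naming
# above). Column names src_ID/dst_ID are assumptions.
edges = [{"src_ID": "a0", "dst_ID": "p0"}, {"src_ID": "a1", "dst_ID": "p0"}]
with open("A_P.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["src_ID", "dst_ID"])
    writer.writeheader()
    writer.writerows(edges)
```

Because `LOAD CSV WITH HEADERS` keys each `row` by the header names, the header row written by `writeheader()` must match the properties referenced in the Cypher statement.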
Add the user information for accessing the database in the config.py file:
self.graph_address = [graph_address]
self.user_name = [user_name]
self.password = [password]
e.g.:
python main.py -m MAGNN -d imdb4MAGNN -t node_classification -g 0 --use_best_config --use_database
The link above gives some basic usage.
Model | Node classification | Link prediction | Recommendation |
---|---|---|---|
TransE[NIPS 2013] | :heavy_check_mark: | ||
TransH[AAAI 2014] | :heavy_check_mark: | ||
TransR[AAAI 2015] | :heavy_check_mark: | ||
TransD[ACL 2015] | :heavy_check_mark: | ||
Metapath2vec[KDD 2017] | :heavy_check_mark: | ||
RGCN[ESWC 2018] | :heavy_check_mark: | :heavy_check_mark: | |
HERec[TKDE 2018] | :heavy_check_mark: | ||
HAN[WWW 2019] | :heavy_check_mark: | :heavy_check_mark: | |
KGCN[WWW 2019] | :heavy_check_mark: | ||
HetGNN[KDD 2019] | :heavy_check_mark: | :heavy_check_mark: | |
HeGAN[KDD 2019] | :heavy_check_mark: | ||
HGAT[EMNLP 2019] | |||
GTN[NeurIPS 2019] & fastGTN | :heavy_check_mark: | ||
RSHN[ICDM 2019] | :heavy_check_mark: | :heavy_check_mark: | |
GATNE-T[KDD 2019] | :heavy_check_mark: | ||
DMGI[AAAI 2020] | :heavy_check_mark: | ||
MAGNN[WWW 2020] | :heavy_check_mark: | ||
HGT[WWW 2020] | :heavy_check_mark: | ||
CompGCN[ICLR 2020] | :heavy_check_mark: | :heavy_check_mark: | |
NSHE[IJCAI 2020] | :heavy_check_mark: | ||
NARS[arxiv] | :heavy_check_mark: | ||
MHNF[arxiv] | :heavy_check_mark: | ||
HGSL[AAAI 2021] | :heavy_check_mark: | ||
HGNN-AC[WWW 2021] | :heavy_check_mark: | ||
HeCo[KDD 2021] | :heavy_check_mark: | ||
SimpleHGN[KDD 2021] | :heavy_check_mark: | ||
HPN[TKDE 2021] | :heavy_check_mark: | :heavy_check_mark: | |
RHGNN[arxiv] | :heavy_check_mark: | ||
HDE[ICDM 2021] | :heavy_check_mark: | ||
HetSANN[AAAI 2020] | :heavy_check_mark: | ||
ieHGCN[TKDE 2021] | :heavy_check_mark: | ||
KTN[NeurIPS 2022] | :heavy_check_mark: |
OpenHGNN Team [GAMMA LAB], DGL Team, and Peng Cheng Laboratory.
See more in CONTRIBUTING.
If you use OpenHGNN in a scientific publication, we would appreciate citations to the following paper:
@inproceedings{han2022openhgnn,
title={OpenHGNN: An Open Source Toolkit for Heterogeneous Graph Neural Network},
  author={Hui Han and Tianyu Zhao and Cheng Yang and Hongyi Zhang and Yaoqi Liu and Xiao Wang and Chuan Shi},
booktitle={CIKM},
year={2022}
}