PyTorch implementation of our EMNLP 2020 paper: Learning Collaborative Agents with Rule Guidance for Knowledge Graph Reasoning.
We propose a neural-symbolic method for knowledge graph reasoning that leverages symbolic rules.
Walk-based models have shown their advantages in knowledge graph (KG) reasoning by achieving decent performance while providing interpretable decisions. However, the sparse reward signals offered by the KG during traversal are often insufficient to guide a sophisticated walk-based reinforcement learning (RL) model. An alternative approach is to use traditional symbolic methods (e.g., rule induction), which achieve good performance but can be hard to generalize due to the limitations of symbolic representation. In this paper, we propose RuleGuider, which leverages high-quality rules generated by symbolic-based methods to provide reward supervision for walk-based agents. Experiments on benchmark datasets show that RuleGuider improves the performance of walk-based models without losing interpretability.
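To make the reward-supervision idea concrete, below is a minimal Python sketch (the names Rule and shaped_reward are hypothetical, not this repository's API): the agent's terminal reward mixes a hit reward from the KG with a confidence-weighted reward from mined symbolic rules, and the mixing ratio corresponds to the --rule_ratio option described further down.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Rule:
    body: Tuple[str, ...]  # relation sequence the rule expects along a path
    confidence: float      # confidence score assigned by the rule miner

def shaped_reward(relations_on_path: Tuple[str, ...],
                  reached_answer: bool,
                  rules: List[Rule],
                  rule_ratio: float = 0.5) -> float:
    # Hit reward: 1 if the walk ended at a correct answer entity, else 0.
    hit_reward = 1.0 if reached_answer else 0.0
    # Rule reward: best confidence among mined rules whose body matches
    # the relation sequence the agent actually traversed.
    rule_reward = max((r.confidence for r in rules
                       if r.body == relations_on_path), default=0.0)
    # Interpolate the two signals (rule_ratio mirrors --rule_ratio).
    return rule_ratio * rule_reward + (1.0 - rule_ratio) * hit_reward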
If you find the repository or RuleGuider helpful, please cite the following paper:
@inproceedings{lei2020ruleguider,
title={Learning Collaborative Agents with Rule Guidance for Knowledge Graph Reasoning},
author={Lei, Deren and Jiang, Gangrong and Gu, Xiaotao and Sun, Kexuan and Mao, Yuning and Ren, Xiang},
booktitle={EMNLP},
year={2020}
}
Install PyTorch (>= 1.4.0) following the instructions on the PyTorch website. Our code is written in Python 3.
Run the following commands to install the required packages:
pip3 install -r requirements.txt
Unpack the data files:
unzip data.zip
It will generate three dataset folders in the ./data directory. In our experiments, the datasets used are fb15k-237, wn18rr, and nell-995.
To train RuleGuider, run the following steps in order (a concrete example follows the commands).
1. Train an embedding-based model:
./experiment-emb.sh configs/<dataset>-<model>.sh --train <gpu-ID>
2. Pretrain the RL agent:
./experiment-pretrain.sh configs/<dataset>-rs.sh --train <gpu-ID> <rule-path> --model point.rs.<embedding-model>
3. Train RuleGuider starting from the pretrained checkpoint:
./experiment-rs.sh configs/<dataset>-rs.sh --train <gpu-ID> <rule-path> --model point.rs.<embedding-model> --checkpoint_path <pretrain-checkpoint-path>
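For example, assuming the config files follow the <dataset>-<model>.sh naming shown above (and with <rule-path> and <pretrain-checkpoint-path> pointing at your own rule file and checkpoint), training on fb15k-237 with conve on GPU 0 might look like:
./experiment-emb.sh configs/fb15k-237-conve.sh --train 0
./experiment-pretrain.sh configs/fb15k-237-rs.sh --train 0 <rule-path> --model point.rs.conve
./experiment-rs.sh configs/fb15k-237-rs.sh --train 0 <rule-path> --model point.rs.conve --checkpoint_path <pretrain-checkpoint-path>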
Note:
- The available embedding models are conve, complex, and distmult.
- Use --board <board-path> to log the training details, --model <model-path> to assign the directory in which checkpoints are saved, and --checkpoint_path <checkpoint-path> to load checkpoints.
- Use --rule_ratio <ratio> to specify the ratio between the rule reward and the hit reward.
To run inference with trained models:
./experiment-emb.sh configs/<dataset>-<model>.sh --inference <gpu-ID>
./experiment-pretrain.sh configs/<dataset>-rs.sh --inference <gpu-ID> <rule-path> --model point.rs.<embedding-model> --checkpoint_path <pretrain-checkpoint-path>
./experiment-rs.sh configs/<dataset>-rs.sh --inference <gpu-ID> <rule-path> --model point.rs.<embedding-model> --checkpoint_path <checkpoint-path>
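For example, inference with a trained RuleGuider model on fb15k-237 with conve (again assuming the config naming above, and with <rule-path> and <checkpoint-path> pointing at your own files) might look like:
./experiment-rs.sh configs/fb15k-237-rs.sh --inference 0 <rule-path> --model point.rs.conve --checkpoint_path <checkpoint-path>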