Source code for the paper *Cognitive Graph for Multi-Hop Reading Comprehension at Scale* (ACL 2019 Oral).
Besides the paper, we also have a Chinese blog post about CogQA on Zhihu (知乎).
CogQA is a novel framework for multi-hop question answering in web-scale documents. Founded on the dual process theory in cognitive science, CogQA gradually builds a cognitive graph in an iterative process by coordinating an implicit extraction module (System 1) and an explicit reasoning module (System 2). While giving accurate answers, our framework further provides explainable reasoning paths.
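The iterative interplay of the two systems can be sketched roughly as follows. This is a minimal toy illustration, not the actual implementation: `system1_extract`, `system2_reason`, and `get_paragraph` are hypothetical stand-ins for the implicit extraction module, the explicit reasoning module, and the document store.

```python
from collections import deque

def cogqa_sketch(question, get_paragraph, system1_extract, system2_reason, start_entities):
    """Toy sketch of the CogQA loop: System 1 extracts next-hop entities and
    answer candidates from each visited paragraph; System 2 then reasons over
    the resulting cognitive graph. All callables here are assumed interfaces."""
    graph = {e: [] for e in start_entities}   # node -> successor nodes
    frontier = deque(start_entities)
    visited = set(start_entities)
    while frontier:
        node = frontier.popleft()
        para = get_paragraph(node)
        if para is None:
            continue
        hop_nodes, answer_nodes = system1_extract(question, node, para)
        for nxt in hop_nodes + answer_nodes:
            graph[node].append(nxt)
            if nxt not in visited:
                visited.add(nxt)
                graph.setdefault(nxt, [])
                if nxt in hop_nodes:      # only hop nodes are expanded further
                    frontier.append(nxt)
    return system2_reason(question, graph)
```

The graph grows frontier-by-frontier, so the reasoning path to every answer candidate is recorded explicitly, which is where the explainability comes from.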
`improved_retrieval.zip` is included in this repo.

Run `pip install -r requirements.txt` to install the dependencies.
Run `python read_fullwiki.py` to load the Wikipedia documents into Redis (check that `dump.rdb` in the redis folder is about 2.4GB).

Run `python process_train.py` to generate `hotpot_train_v1.1_refined.json`, which contains the edges of the gold-only cognitive graphs.

Run `mkdir models` to create a directory for model checkpoints.
The code automatically assigns tasks to all available devices, each handling `batch_size / num_gpu` samples. We recommend at least 11GB of memory per GPU to hold a batch of 2.
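The device split described above amounts to something like the following (a toy illustration of the `batch_size / num_gpu` division; the function name is ours, not the repo's):

```python
def per_device_batches(samples, num_gpu):
    """Round-robin split of one batch across num_gpu devices, so each
    device receives roughly batch_size / num_gpu samples."""
    shards = [[] for _ in range(num_gpu)]
    for i, sample in enumerate(samples):
        shards[i % num_gpu].append(sample)
    return shards
```

With a batch of 8 samples and 4 GPUs, each device processes 2 samples, which is why per-GPU memory bounds the effective batch size.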
Run `python train.py` to train Task #1 (span extraction).

Run `python train.py --load=True --mode='bundle'` to train Task #2 (answer prediction).

`cogqa.py` implements the algorithm that answers questions with a trained model. We split out the 1-hop nodes found by another, similar model into `improved_retrieval.zip` for reuse in other algorithms; it can directly improve your results in the fullwiki setting if you simply replace the original input.
Run `unzip improved_retrieval.zip`, then:

```
python cogqa.py --data_file='hotpot_dev_fullwiki_v1_merge.json'
python hotpot_evaluate_v1.py hotpot_dev_fullwiki_v1_merge_pred.json hotpot_dev_fullwiki_v1_merge.json
```
You can check the cognitive graph (the reasoning process) in the `cg` part of the predicted json file.