sidhantls / differentiable-kb-qa

Implementation of Scalable Neural Methods for Reasoning With a Symbolic Knowledge Base

QA on Differentiable Knowledge Bases - Reified KB

This is an unofficial adaptation of the work presented in Scalable Neural Methods for Reasoning With a Symbolic Knowledge Base (paper). It uses a transformer-based encoder, similar to the work here, as opposed to the word2vec-based encoder used in the symbolic KB paper.

The purpose of this implementation is to provide insight into the implementation-level details of these papers and the ideas associated with them, such as making a knowledge graph differentiable through the "follow" operation (sketched below).

Implementation

Implements a QA model that retrieves answers from a KB. It consists of an encoder, MiniLM-6/MiniLM-12, which encodes a question; the model then performs the differentiable query operation on the knowledge base described in the reified KB paper above. This encoder architecture differs from the paper's in that it is based on a transformer rather than a word2vec model. We fine-tune a MiniLM-6 Sentence Transformer, chosen primarily for its memory and compute efficiency; it can be replaced with any other model from the Hugging Face library (see the sketch below).

How to run:

Benchmarks

Benchmarked on the MetaQA dataset, the same benchmark used in the reified KB paper linked above.

| MetaQA | Hits@1 |
| ------ | ------ |
| 1-hop  | 0.977  |
| 2-hop  | 0.787  |
| 3-hop  | 0.821  |

One expectation was that 3-hop performance would be worse than 2-hop, as was the case in the reified KB paper; here, however, 3-hop slightly outperforms 2-hop. The reified KB paper does report higher 2-hop performance, but this implementation outperforms it on 1-hop and 3-hop.

User Experimentation

The purpose of this repository is to provide a baseline for users to implement improvements in the space of question answering with reified differentiable KBs.

To add

Installation

Requires pytorch-lightning >= 1.5.0, pytorch >= 1.7, and tqdm.