dair-iitd / OxKBC

State-of-the-art models for Knowledge Base Completion (KBC) for large KBs (such as FB15k and YAGO) are based on tensor factorization (TF), e.g., DistMult, ComplEx. While they produce good results, they cannot expose any rationale behind their predictions, potentially reducing a user's trust in the outcome of the model. Previous works have explored creating an inherently explainable model, e.g., Neural Theorem Proving (NTP), DeepPath, MINERVA, but explainability in them comes at the cost of performance. Others have tried to create an auxiliary explainable model having high fidelity with the underlying TF model, but unfortunately, they do not scale well to large KBs. In this work, we propose OxKBC, an Outcome eXplanation engine for KBC, which provides a post-hoc explanation for every triple inferred by an (uninterpretable) factorization-based model. It first augments the underlying Knowledge Graph by introducing weighted edges between entities based on their similarity as given by the underlying model. It then defines a notion of human-understandable explanation paths along with a language to generate them. Depending on the edges, the paths are aggregated into second-order templates for further selection. The best template, with its grounding, is then selected by a neural selection module that is trained with minimal supervision via a novel loss function. Experiments over Mechanical Turk demonstrate that users overwhelmingly find our explanations more trustworthy compared to rule mining.
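The graph-augmentation step can be pictured as adding weighted similarity edges derived from the trained factorization model's entity embeddings. The sketch below is illustrative only and assumes the TF model exposes an entity-embedding matrix; the function and parameter names (`similarity_edges`, `top_k`) are hypothetical and not the repository's actual API.

```python
import numpy as np

def similarity_edges(entity_embeddings, top_k=5):
    """Build weighted entity-entity similarity edges.

    entity_embeddings: (num_entities, dim) array taken from a trained
    factorization model (e.g., DistMult/ComplEx); illustrative assumption.
    Returns a list of (head, tail, weight) edges connecting each entity
    to its top_k most cosine-similar neighbours.
    """
    # L2-normalise so that the dot product equals cosine similarity.
    norms = np.linalg.norm(entity_embeddings, axis=1, keepdims=True)
    unit = entity_embeddings / np.clip(norms, 1e-12, None)
    sims = unit @ unit.T
    np.fill_diagonal(sims, -np.inf)  # exclude self-similarity

    edges = []
    for head in range(sims.shape[0]):
        neighbours = np.argsort(-sims[head])[:top_k]
        edges.extend(
            (head, int(tail), float(sims[head, tail])) for tail in neighbours
        )
    return edges
```

Such edges, together with the original KB relations, give the path language something to traverse when grounding second-order explanation templates.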
MIT License