Code for the paper "Contrastive Learning Reduces Hallucination in Conversations" (AAAI 2023).
We propose MixCL, a contrastive learning framework that reduces hallucination in LM-based knowledge-grounded dialogue systems.
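MixCL's exact mixed-contrastive objective is defined in the paper; purely as an illustration of the general idea, an InfoNCE-style contrastive term that scores one grounded (positive) response against hallucinated (negative) candidates could be sketched as follows (function name and score values are hypothetical, not from this repo):

```python
import math

def contrastive_loss(pos_score, neg_scores, temperature=1.0):
    """InfoNCE-style loss: push the positive (grounded) candidate's
    similarity score above the negative (hallucinated) candidates'.
    Scores are raw similarities; lower loss = better separation."""
    logits = [pos_score / temperature] + [s / temperature for s in neg_scores]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    return -math.log(exps[0] / sum(exps))

# A well-separated positive yields a small loss; a positive scored
# below the negatives yields a large loss.
loss_good = contrastive_loss(5.0, [0.0, -1.0])
loss_bad = contrastive_loss(0.0, [5.0, 4.0])
```

This is a generic sketch only; see the paper for the actual span-level mixed-contrast formulation.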
The code for extracting spans is available in mixup.py, where we use Stanza and spaCy to identify entities and constituents in text.
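The extraction in mixup.py relies on spaCy/Stanza pipelines, which require downloaded models. As a dependency-free sketch of the same idea, the toy extractor below pulls capitalized multi-word runs as candidate entity spans; it is a crude stand-in for real NER and constituency parsing, not the repo's implementation:

```python
import re

def candidate_spans(text):
    """Toy stand-in for the spaCy/Stanza span extraction in mixup.py:
    treat runs of capitalized words as candidate entity spans.
    Real NER and constituency parsing are far more accurate."""
    return re.findall(r"[A-Z][a-z]+(?:\s[A-Z][a-z]+)*", text)

spans = candidate_spans("The Wizard of Oz was written by Frank Baum.")
```

In the actual pipeline, a call like `spacy.load("en_core_web_sm")` followed by iterating `doc.ents` and `doc.noun_chunks` would replace this heuristic.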
The code for model training and testing is available in run.py.
The dataset (i.e., Wizard-of-Wikipedia) is placed in /dataset, and /utils provides the code for I/O and evaluation.
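The exact evaluation scripts live in /utils; as a self-contained sketch, a simplified version of the unigram F1 commonly reported on Wizard-of-Wikipedia could be computed like this (no stopword removal or normalization, unlike typical WoW evaluation code):

```python
from collections import Counter

def unigram_f1(prediction, reference):
    """Simplified unigram F1 between a predicted and a reference
    response: token overlap scored by precision/recall harmonic mean.
    Real WoW scripts additionally normalize punctuation and casing."""
    pred = prediction.lower().split()
    ref = reference.lower().split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

This is an illustrative approximation of the metric, not the repo's evaluation code.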
We provide example model outputs on the WoW Seen split in outputs_on_seen.txt.
If you find our work useful, please cite:
@inproceedings{Sun2023ContrastiveLR,
title={Contrastive Learning Reduces Hallucination in Conversations},
author={Weiwei Sun and Zhengliang Shi and Shen Gao and Pengjie Ren and M. de Rijke and Zhaochun Ren},
booktitle={AAAI Conference on Artificial Intelligence},
year={2023},
pages={13618--13626}
}