sunnweiwei / MixCL

Contrastive Learning Reduces Hallucination in Conversations

Code for the paper *Contrastive Learning Reduces Hallucination in Conversations* (AAAI 2023).

We propose MixCL, a contrastive learning framework that reduces hallucination in LM-based knowledge-grounded dialogue systems.
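To illustrate the contrastive objective at a high level, here is a minimal, hypothetical sketch of an InfoNCE-style loss: given a similarity score for the grounded (positive) response and scores for hallucinated (negative) candidates, the loss pushes the model to rank the positive above the negatives. The function names and scores are illustrative assumptions, not the repository's actual implementation (see run.py for that).

```python
import math

def contrastive_loss(pos_score, neg_scores, temperature=0.1):
    """InfoNCE-style loss (illustrative sketch): the positive,
    knowledge-grounded response should score higher than the
    hallucinated negatives."""
    logits = [pos_score / temperature] + [s / temperature for s in neg_scores]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    return -math.log(exps[0] / sum(exps))

# Toy scores: when negatives score close to the positive, the loss is larger.
loss_easy = contrastive_loss(0.9, [0.1, 0.2])
loss_hard = contrastive_loss(0.9, [0.85, 0.88])
```

With well-separated negatives the loss is near zero; with near-miss negatives it grows, which is what drives the model away from hallucinated content.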


Models

The code for extracting spans is in mixup.py, where we use stanza and spacy to identify entities and constituencies in text.
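The extracted spans are used to build mixed examples. As a rough, hypothetical sketch of the span-level mixing idea (assumed behavior, not the code in mixup.py): a span of the grounded response is swapped for the corresponding span of a negative candidate, yielding a partially hallucinated sequence together with the fraction of negative tokens it contains.

```python
def mix_spans(pos_tokens, neg_tokens, span):
    """Illustrative span-level mix-up: replace the (start, end) token span
    of the positive (grounded) response with the same span from a negative
    (hallucinated) response. Returns the mixed sequence and the fraction of
    tokens that came from the negative."""
    start, end = span
    mixed = pos_tokens[:start] + neg_tokens[start:end] + pos_tokens[end:]
    neg_ratio = (end - start) / len(mixed)
    return mixed, neg_ratio

pos = ["paris", "is", "the", "capital", "of", "france"]
neg = ["paris", "is", "a", "city", "in", "germany"]
mixed, ratio = mix_spans(pos, neg, (5, 6))
# mixed = ["paris", "is", "the", "capital", "of", "germany"], ratio = 1/6
```

In this sketch the entity span ("france") is the unit of mixing; in the repository, entity and constituency spans are identified with stanza and spacy.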

The code for model training and testing is in run.py.

Datasets

The dataset (Wizard-of-Wikipedia) is placed in /dataset, and /utils provides the code for I/O and evaluation.

Evaluation

We provide an example of model outputs on the WoW seen split in outputs_on_seen.txt.

Cite

@inproceedings{Sun2023ContrastiveLR,
  title={Contrastive Learning Reduces Hallucination in Conversations},
  author={Weiwei Sun and Zhengliang Shi and Shen Gao and Pengjie Ren and Maarten de Rijke and Zhaochun Ren},
  booktitle={AAAI Conference on Artificial Intelligence},
  year={2023},
  pages={13618--13626}
}