This repository contains the data, scripts, and baseline code for DSTC11 Track 5.
This challenge track aims to support more informative and engaging task-oriented conversations by utilizing subjective knowledge from review posts. Track participants will develop dialogue systems that understand relevant review posts and generate system responses grounded in the selected review knowledge snippets.
Organizers: Seokhwan Kim, Spandana Gella, Chao Zhao, Di Jin, Alexandros Papangelis, Behnam Hedayatnia, Yang Liu, Dilek Hakkani-Tur
If you want to publish experimental results with this dataset or use the baseline models, please cite this article:
@misc{zhao2023what,
      title={"What do others think?": Task-Oriented Conversational Modeling with Subjective Knowledge},
      author={Chao Zhao and Spandana Gella and Seokhwan Kim and Di Jin and Devamanyu Hazarika and Alexandros Papangelis and Behnam Hedayatnia and Mahdi Namazifar and Yang Liu and Dilek Hakkani-Tur},
      year={2023},
      eprint={2305.12091},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
This challenge track distinguishes between turns that can be handled by existing task-oriented conversational models with no extra knowledge and turns that require external subjective knowledge to be answered by the dialogue system. The turns that require knowledge access are the evaluation targets of this track, addressed through the following three tasks:
| Task #1 | Knowledge-seeking Turn Detection |
|---|---|
| Goal | To decide whether to continue the existing scenario or trigger the knowledge access branch for a given utterance and dialogue history |
| Input | Current user utterance, Dialogue context, Knowledge snippets |
| Output | Binary class (requires knowledge access or not) |

| Task #2 | Knowledge Selection |
|---|---|
| Goal | To select proper subjective knowledge sources given a dialogue state at each turn with knowledge access |
| Input | Current user utterance, Dialogue context, Knowledge snippets |
| Output | List of relevant knowledge candidates |

| Task #3 | Knowledge-grounded Response Generation |
|---|---|
| Goal | To generate a system response given the input utterance, dialogue context, and selected knowledge snippets |
| Input | Current user utterance, Dialogue context, Selected knowledge snippets |
| Output | Generated system response |
Participants will develop systems to generate the outputs for each task. They can leverage the annotations and the ground-truth responses available in the training and validation datasets.
In the test phase, participants will be given a set of unlabeled test instances and will submit up to 5 system outputs for all three tasks.
NOTE: For teams interested in only one or two of the tasks, we recommend using our baseline system for the remaining tasks to complete the system outputs.
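The exact submission format is described in data/README.md. As a hedged sketch of how the three tasks' outputs fit together per turn (the field names and identifiers below are illustrative assumptions, not the official schema):

```python
# Illustrative sketch only: the authoritative output format is defined in
# data/README.md. The field names ("target", "knowledge", "response") and
# identifiers below are assumptions made for illustration.
import json

outputs = [
    # Task #1 decided this turn does not require knowledge access.
    {"target": False},
    # Task #1 flagged this turn; Task #2 selected knowledge snippets
    # (hypothetical identifiers); Task #3 generated a grounded response.
    {
        "target": True,
        "knowledge": [{"domain": "hotel", "entity_id": 1, "doc_id": 0}],
        "response": "Most recent guests found the staff there very friendly.",
    },
]

with open("submission.json", "w") as f:
    json.dump(outputs, f, indent=2)
```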
Each submission will first be evaluated with the following task-specific automated metrics:
| Task | Automated Metrics |
|---|---|
| Knowledge-seeking Turn Detection | Precision/Recall/F-measure |
| Knowledge Selection | Precision/Recall/F-measure, Accuracy |
| Knowledge-grounded Response Generation | BLEU, ROUGE, METEOR |
To account for the dependencies between the tasks, the scores for Knowledge Selection and Knowledge-grounded Response Generation are weighted by the Knowledge-seeking Turn Detection performance. Please see scores.py for more details.
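The idea behind this weighting can be illustrated with a minimal sketch; this is a simplified illustration of the dependency, not the exact logic implemented in scores.py:

```python
# Simplified illustration of the task dependency, not the exact logic of
# scores.py: downstream tasks are only credited on turns where the
# knowledge-seeking turn detection decision was itself correct.
def downstream_turn_scores(detected, is_target, selection_f1, response_bleu):
    """Per-turn downstream scores; all argument names are hypothetical."""
    if not is_target:
        return None  # not a knowledge-seeking turn, out of scope for Tasks #2/#3
    if not detected:
        # A missed knowledge-seeking turn gets no credit downstream,
        # so detection errors propagate to the selection/generation scores.
        return {"selection_f1": 0.0, "bleu": 0.0}
    return {"selection_f1": selection_f1, "bleu": response_bleu}
```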
The final ranking will be based on human evaluation results, conducted only for the systems selected according to their automated evaluation scores. The human evaluation will address the following aspects: appropriateness and relevance to the given knowledge.
In this challenge track, participants will use an augmented version of MultiWOZ 2.1 that includes newly introduced subjective knowledge-seeking turns. All the ground-truth annotations for the Knowledge-seeking Turn Detection and Knowledge Selection tasks, as well as the agent's responses for the Knowledge-grounded Response Generation task, are available for developing the components on the training and validation sets. In addition, the relevant knowledge snippets for each domain and entity are provided in knowledge.json.
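A minimal sketch of loading the knowledge snippets is shown below; the actual schema of knowledge.json is documented in data/README.md, and the file path, nesting, and key names assumed here are illustrative only:

```python
# Minimal sketch of reading knowledge.json. The file path, nesting, and the
# "name" key are assumptions for illustration; see data/README.md for the
# authoritative schema.
import json

with open("data/knowledge.json") as f:
    knowledge = json.load(f)

for domain, entities in knowledge.items():        # e.g. a domain such as "hotel"
    for entity_id, entity in entities.items():    # entities within the domain
        print(domain, entity_id, entity.get("name"))
```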
In the test phase, participants will be evaluated on the results generated by their models for the unlabeled test set. To evaluate the generalizability and portability of each model, the unseen test set may include domains, entities, and locales that differ from MultiWOZ.
Data and system output format details can be found in data/README.md.
- To join the mailing list: visit https://groups.google.com/a/dstc.community/forum/#!forum/list/join
- To post a message: send your message to list@dstc.community
- To leave the mailing list: visit https://groups.google.com/a/dstc.community/forum/#!forum/list/unsubscribe
Please feel free to contact: seokhwk (at) amazon (dot) com
The code is licensed under Apache 2.0 (see SOFTWARELICENSE) and the data files are licensed under CDLA-Sharing 1.0 (see DATALICENSE).