aliannejadi / ClariQ

ClariQ: SCAI Workshop data challenge on conversational search clarification.

Question about "Clarification Need" Annotation Guidelines #9

Open samanehrad opened 1 month ago

samanehrad commented 1 month ago

Hi, I have a question about the clarification need feature in your dataset. I understand that higher scores (1, 2, 3, 4) indicate that more clarification is needed, but I don't understand how these values were specifically assigned. Could you please explain: What criteria were used to determine whether a query needs more or less clarification? How did annotators decide between the different levels? I've read the paper but couldn't find detailed guidelines for this specific feature. Thanks.

samanehrad commented 1 month ago

Hello, thank you for your excellent paper and dataset. I wanted to remind you about the question I had asked earlier. Best regards, Samaneh

aliannejadi commented 2 weeks ago

Dear Samaneh, Thank you for your interest and question. We defined "1" as queries that need no clarification, i.e., the system can confidently answer the user's question. Label "2" covers queries that the system can answer confidently but that would still benefit from additional clarification (mainly facet-based queries). "3" covers queries for which the system can provide some answers but would still benefit greatly from clarification. Finally, "4" marks queries that absolutely need clarification, as it would be impossible to answer them otherwise.
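
If it helps, here is a minimal sketch (not from the ClariQ repo itself) of how one might inspect the distribution of these labels in the training data. It assumes a tab-separated `train.tsv` with `topic_id`, `initial_request`, and `clarification_need` columns; adjust the path and column names if your copy of the dataset differs.

```python
# Hypothetical sketch: count ClariQ topics per clarification_need label.
# Assumes data/train.tsv with "topic_id" and "clarification_need" columns.
import pandas as pd

df = pd.read_csv("data/train.tsv", sep="\t")

# Each topic appears once per facet/question pair, so keep one row per topic.
topics = df.drop_duplicates(subset="topic_id")

# Label meanings as described above.
label_meaning = {
    1: "no clarification needed (system can answer confidently)",
    2: "answerable, but clarification would help (e.g., facet-based queries)",
    3: "partially answerable; would benefit greatly from clarification",
    4: "impossible to answer without clarification",
}

for label, group in topics.groupby("clarification_need"):
    print(f"Label {label} ({label_meaning.get(label, 'unknown')}): {len(group)} topics")
```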