Hello @billkunghappy
Thanks for your interest.
First Problem: Each turn in a dialog contains 100 candidates that need to be scored and ranked. At test time you would not know the "ground truth" candidate, and thus need to score each candidate independently. Further, the scoring function to use is completely up to you. One option is the cross_entropy_loss used when training the model as a conditional language model. For this choice of scoring function, you have to use the candidate as both the input and the target, since you have no knowledge of the "ground truth".
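For illustration, here is a minimal sketch of such a scoring function. It assumes a Hugging Face GPT-2 checkpoint as the conditional language model (the baseline may use a different model and tokenizer, so treat the names here as placeholders); the candidate tokens act as both input and target, and a lower cross-entropy means a more likely candidate.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Assumption: a GPT-2 style conditional LM; swap in whatever checkpoint you actually train.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def score_candidate(context: str, candidate: str) -> float:
    """Average per-token cross-entropy of the candidate given the context (lower = better)."""
    ctx_ids = tokenizer.encode(context)
    cand_ids = tokenizer.encode(candidate)
    input_ids = torch.tensor([ctx_ids + cand_ids])
    # The candidate is both input and target; context positions are masked
    # with -100 so only the candidate tokens contribute to the loss.
    labels = torch.tensor([[-100] * len(ctx_ids) + cand_ids])
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss
    return loss.item()
```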
Second Problem: During retrieval, one does not feed the candidate sentence to "generate" it, but to score its likelihood under the model. If this is what you are referring to, then you feed the actual candidate tokens (not those predicted by the model) to obtain the probability of each next token in the candidate given the previous ones (teacher forcing).
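As a sketch of what this looks like at evaluation time (again only an assumption about the surrounding code, reusing the hypothetical score_candidate helper above): each of the 100 candidate strings is fed to the model as-is, scored independently, and the candidates are then ranked by score.

```python
def rank_candidates(context: str, candidates: list) -> list:
    """Return candidate indices sorted from most to least likely under the model.

    Each candidate's own tokens are fed to the model (teacher forcing);
    nothing is generated, we only read off the likelihood of those tokens.
    """
    scores = [score_candidate(context, c) for c in candidates]
    # Lower cross-entropy means higher likelihood, so sort ascending.
    return sorted(range(len(candidates)), key=lambda i: scores[i])

# Hypothetical usage: retrieval metrics such as recall@k then check where
# the annotated response lands in this ranking.
# ranking = rank_candidates(dialog_context, round_datum)  # 100 candidates per turn
# recall_at_5 = int(ranking.index(gt_index) < 5)
```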
Hope this answers your questions.
P.S.: Edited to avoid overload of the word "ground truth".
Thanks @satwikkottur. For the second problem, you said to feed the ground truth token to obtain the probability of the next token. But in the first problem, you said that at test time, you would not know the "ground truth" candidate. The question is: since we don't have the ground truth during testing, how are we able to feed the ground truth into the model and obtain the candidate's probability?
Hello @billkunghappy,
I edited the above comment to add more clarity. Hope this addresses your question.
Hi, I have some questions about the retrieval evaluation of the response generation. I'm trying to write the retrieval evaluation scripts for mm_dst to evaluate the response text, and hope to clarify some details. I can understand this part of the code.
The question is about how each score (round_datum[i], for i in [0, len(round_datum))) is calculated. As I understand it, there are 100 candidates for each turn in a dialogue.
The first problem:
The second problem: