When performing inference with the pretrained checkpoints, I found that the NLG in the function e2e_batch_generate uses the encoded golden context as input. In other words, the model infers with the golden previous system responses instead of the system responses generated by the model.
I am not sure whether this is a real problem or just a misunderstanding on my part from not reading the code carefully enough.
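For clarity, here is a minimal sketch of the evaluation loop I expected, where the context for each turn is built from the model's own previously generated responses rather than the golden ones. All names here (generate_dialogue, user_utterance, the tokenizer/model interface) are hypothetical and not taken from this repository's code.

```python
# Hypothetical sketch: context is accumulated from *generated* responses,
# not the golden responses stored in the dataset.

def generate_dialogue(model, tokenizer, turns, max_new_tokens=64):
    """Run multi-turn inference, feeding the model's own previous
    responses back into the context (names are illustrative only)."""
    context = ""
    generated_responses = []
    for turn in turns:
        # Append the user utterance for the current turn.
        context += " " + turn["user_utterance"]
        input_ids = tokenizer(context, return_tensors="pt").input_ids
        output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
        response = tokenizer.decode(output_ids[0], skip_special_tokens=True)
        generated_responses.append(response)
        # Use the generated response, not the golden one, for the next turn.
        context += " " + response
    return generated_responses
```

If I understand correctly, e2e_batch_generate instead encodes the golden context directly, so each turn is conditioned on the ground-truth previous responses.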