Closed ahtsham58 closed 4 years ago
You can run the test scripts, e.g. `python single_dialog.py --train False --task_id 1` for the Memory Network.
If you are asking how I created the attention visualizations, it was a lot of manual work. I remember saving the attention weights for each test dialog in a separate .txt file when I ran inference. Next, I manually created the tables shown in the paper and filled in the sentence from the test set as well as the attention values at each hop.
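For anyone trying to reproduce this, the dumping step could look something like the sketch below. Everything here is illustrative, not the repo's actual code: `save_attention`, the directory name, and the toy data are all made-up names, and the real model's hop count and attention API may differ.

```python
# Sketch: write one .txt file per test dialog, listing each memory
# sentence alongside its attention weight at every hop.
# All names/data here are hypothetical, not from the repo.
import os

def save_attention(dialog_id, memories, hop_weights, out_dir="attn_logs"):
    """memories: list of sentences in the dialog history.
    hop_weights: one list of per-sentence attention values per hop."""
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, f"dialog_{dialog_id}.txt")
    with open(path, "w") as f:
        for i, sentence in enumerate(memories):
            # Collect this sentence's weight at each hop, tab-separated
            # from the sentence text so the file is easy to paste into a table.
            weights = " ".join(f"{hop[i]:.4f}" for hop in hop_weights)
            f.write(f"{sentence}\t{weights}\n")
    return path

# Toy example: 2 memory sentences, attention over 3 hops.
memories = ["hi i want a table", "api_call italian paris four"]
hop_weights = [[0.9, 0.1], [0.3, 0.7], [0.05, 0.95]]
print(save_attention(0, memories, hop_weights))
```

From files like these, the tables in the paper's appendix can then be assembled by hand, as described above.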
So, what I understood is that generating a dialogue conversation between the user and the bot is not possible with these models.
The experiments in the paper report only accuracy scores, so dialogue quality remains an open question.
Is that correct? Please confirm.
If you revisit this paper as well as the original Memory Network work by FAIR, you'll find that all of these works target retrieval-based dialog systems. For generative models, follow the line of work starting from the Seq2Seq/Neural Conversational Model papers.
Maybe you can find something of interest here: https://github.com/chaitjo/personalized-dialog/issues/7
Ok, thanks.
Hi @chaitjo ,
I trained the models successfully and I could see the logs for each task with their loss values and other metrics.
I want to analyze the model's performance at predicting the next sentence and recommending items to the user based on the user's dialogue history, as claimed in the paper. You also presented some dialogue examples for each task in the paper (Appendix).
Could you please give me a head start on finding dialogue examples between a user and a bot like the ones you presented in the Appendix?