declare-lab / MIME

This repository contains PyTorch implementations of the models from the paper MIME: MIMicking Emotions for Empathetic Response Generation.

Questions about code reproduction #1

Closed: Lireanstar closed this issue 3 years ago

Lireanstar commented 3 years ago

Hello, I successfully ran the program following the README.md, but because of the program's patience (early stopping) setting, my run ended early, and I retrained from the stage at around iteration 18000. Is this normal? It seems the model cannot learn the data well, and the test results are worse than expected. When I ran MIME, I finally got these results:

| Split | Loss | PPL | Accuracy | Bleu_g | Bleu_b | Bleu_t |
|-------|--------|---------|----------|--------|--------|--------|
| valid | 3.7737 | 43.5429 | 0.33 | 0.00 | 0.00 | 0.00 |
| test (generation) | 3.6466 | 38.3426 | 0.33 | 2.50 | 2.61 | 1.57 |

How can I set up the code to reproduce the results of the paper?
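For context, patience-based early stopping of the kind mentioned above typically looks like the sketch below. This is a minimal, self-contained illustration with made-up numbers (`validation_losses` and `patience = 3` are hypothetical), not the repository's actual training loop:

```python
# Minimal sketch of patience-based early stopping: training stops once the
# validation loss has failed to improve for `patience` consecutive checks.
validation_losses = [4.1, 3.9, 3.8, 3.85, 3.9, 3.88, 3.87]  # dummy values
patience = 3                      # hypothetical value; the repo's setting may differ
best_val_loss = float("inf")
bad_checks = 0

for step, val_loss in enumerate(validation_losses):
    if val_loss < best_val_loss:
        best_val_loss = val_loss
        bad_checks = 0            # improvement: reset the counter
    else:
        bad_checks += 1           # no improvement at this evaluation
        if bad_checks >= patience:
            print(f"Early stopping at evaluation {step}")
            break
```

With a scheme like this, a small patience value can stop training well before the model converges, which would explain a run ending earlier than expected.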

nmder commented 3 years ago

Hi,

We will get back to you about the training. The hyperparameters might have been tinkered with before the upload. You may try the pre-trained model provided in the README: https://drive.google.com/drive/folders/1Qab9mH6n6qPrVTP4vtQ0-oGa6GYrD8Lm

Thanks!
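For reference, using a downloaded PyTorch checkpoint usually follows the pattern below. This is a hedged sketch: the checkpoint layout (a plain state_dict versus a dict wrapping one under a "model" key) is an assumption, not something documented in this thread.

```python
import torch
import torch.nn as nn

def load_pretrained(path: str, model: nn.Module) -> nn.Module:
    """Restore trained weights from a downloaded checkpoint into `model`."""
    checkpoint = torch.load(path, map_location="cpu")
    # Some repos save {"model": state_dict, ...}; others save the state_dict directly.
    state_dict = checkpoint["model"] if "model" in checkpoint else checkpoint
    model.load_state_dict(state_dict)
    model.eval()  # switch to evaluation mode before generation/testing
    return model
```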

Emrys-Hong commented 3 years ago

Hi,

The problem seems to be caused by the random seed. We have updated the code to use a fixed random seed; if you run it again, you should be able to reproduce the results.

Sorry for the wait; let us know if you have any further problems. Thanks!
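For reference, fixing the random seed in a PyTorch project normally means seeding every RNG involved; a general-purpose sketch (not necessarily the exact code added to this repository) looks like this:

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    """Seed all RNGs that typically affect training, for reproducibility."""
    random.seed(seed)                 # Python's built-in RNG
    np.random.seed(seed)              # NumPy RNG
    torch.manual_seed(seed)           # PyTorch CPU RNG
    torch.cuda.manual_seed_all(seed)  # PyTorch GPU RNGs (all devices)
    # Make cuDNN deterministic; this can slow training slightly.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)  # call once, before building the model and data loaders
```

Even with all seeds fixed, some CUDA kernels are nondeterministic, so small run-to-run differences can remain.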