SivilTaram / Persona-Dialogue-Generation

The code of ACL 2020 paper "You Impress Me: Dialogue Generation via Mutual Persona Perception"
MIT License

transmitter training doesn't stop despite reaching num_train_epochs #11

Closed · parthpatwa closed this 4 years ago

parthpatwa commented 4 years ago

In train_transmitter.py, num_train_epochs = 4. Despite that, the model keeps training after 4 epochs. PFA transmitter.

SivilTaram commented 4 years ago

@parthpatwa It does not occur in my experiments. However, you could just shut the training down and use the latest checkpoint as the Transmitter model.
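For reference, a minimal sketch for picking the newest checkpoint on disk; the directory and glob pattern are assumptions and should be pointed at wherever train_transmitter.py actually saves model files:

```python
# Sketch: find the most recently written checkpoint file.
# 'tmp/transmitter*' is a hypothetical save path, not the repo's actual one.
import glob
import os

candidates = glob.glob('tmp/transmitter*')
if candidates:
    latest = max(candidates, key=os.path.getmtime)
    print('Use this file as the Transmitter model:', latest)
```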

parthpatwa commented 4 years ago

@SivilTaram I think I found why it is happening:

1) In train_transmitter.py, line 74: num_train_epochs = 4
2) However, in parlai/scripts/train_model.py, line 237: self.max_num_epochs = opt['num_epochs'] if opt['num_epochs'] > 0 else float('inf')

So ParlAI expects the key to be 'num_epochs', not 'num_train_epochs'. Since it does not find 'num_epochs' set in opt, max_num_epochs falls back to float('inf') and the training never stops.
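A standalone sketch of the failure mode (this is not the actual ParlAI code; the dict and the -1 default here only mimic the behavior described above):

```python
# Key written by train_transmitter.py:
opt = {'num_train_epochs': 4}

# ParlAI's TrainLoop reads a *different* key; -1 stands in for its default.
num_epochs = opt.get('num_epochs', -1)
max_num_epochs = num_epochs if num_epochs > 0 else float('inf')
print(max_num_epochs)  # inf -> the epoch-based stopping condition never triggers

# The fix: pass the value under the key ParlAI actually checks.
opt['num_epochs'] = opt.pop('num_train_epochs')
num_epochs = opt['num_epochs']
print(num_epochs if num_epochs > 0 else float('inf'))  # 4 -> training stops
```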

Is this the correct reason? If so, should I go ahead and open a pull request?

SivilTaram commented 4 years ago

@parthpatwa Yeah, you caught it. I would be very happy if you could open a PR 👍

parthpatwa commented 4 years ago

@SivilTaram Done, please check PR #14.

SivilTaram commented 4 years ago

@parthpatwa Thanks :)