CR-Gjx / LeakGAN

The code for the paper "Long Text Generation via Adversarial Training with Leaked Information" (AAAI 2018). Text generation using a GAN and hierarchical reinforcement learning.
https://arxiv.org/abs/1709.08624

In /Image CoCo, what should stopping training be based on? #10

Closed shaomai00 closed 6 years ago

shaomai00 commented 6 years ago

I tried to train LeakGAN in /Image CoCo, and I saw that both worker_loss and manager_loss are unstable:

total_batch: 440 -0.0632318 0.985172
...
total_batch: 450 -4.78807e-05 3.20244
total_batch: 451 -0.0882713 0.516656
...
total_batch: 455 -0.0578834 1.14354

I'm wondering how to know when to stop training. Are g_loss and w_loss meaningful during training?

And why is g_loss < 0? Thank you very much.

CR-Gjx commented 6 years ago

In fact, g_loss and w_loss don't mean much on their own. During training, the model saves some generated samples in the save folder; you can convert them to real words with convert.py and judge from the samples whether you should stop training.
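A minimal sketch of what that conversion step does: map the generated token-id sequences back to readable sentences so you can eyeball sample quality. The vocabulary format, the function name, and the end-of-sentence handling here are assumptions for illustration, not the repo's exact implementation in convert.py.

```python
def ids_to_sentences(id_sequences, id2word, eos_id=None):
    """Convert lists of token ids into space-joined sentences.

    Truncates a sentence when eos_id is seen, mirroring how padded
    samples are usually cut off before manual inspection.
    """
    sentences = []
    for seq in id_sequences:
        words = []
        for tok in seq:
            if eos_id is not None and tok == eos_id:
                break
            # Unknown ids fall back to a placeholder token.
            words.append(id2word.get(tok, "<unk>"))
        sentences.append(" ".join(words))
    return sentences


if __name__ == "__main__":
    # Toy vocabulary standing in for the COCO caption vocab.
    id2word = {0: "a", 1: "man", 2: "rides", 4: "bike", 5: "<eos>"}
    samples = [[0, 1, 2, 4, 5, 3], [1, 2, 4]]
    for s in ids_to_sentences(samples, id2word, eos_id=5):
        print(s)
```

Reading a handful of these decoded sentences every few hundred batches is a more reliable stopping signal than the adversarial losses, which oscillate by design.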

shaomai00 commented 6 years ago

Thank you for answering. I think I understand now.