agrimgupta92 / sgan

Code for "Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks", Gupta et al, CVPR 2018
MIT License

Noise added to the labels and predictions in cross-entropy loss #3

Closed by tessavdheiden 6 years ago

tessavdheiden commented 6 years ago

Hi Agrim!

I saw that you add uniform noise to the real and fake labels. Can you explain why you do this, and how the ranges of these uniform distributions were chosen?

From losses.py:

    y_real = torch.ones_like(scores_real) * random.uniform(0.7, 1.2)
    y_fake = torch.zeros_like(scores_fake) * random.uniform(0, 0.3)
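A side note on the quoted snippet: multiplying a tensor of zeros by a scalar leaves it at exactly zero, so as written only the real labels are actually perturbed. The sketch below demonstrates this, and shows what noisy fake labels would look like if addition were used instead (an assumption on my part, not a change the repo makes):

```python
import random

import torch

# Stand-ins for discriminator outputs (shapes are illustrative).
scores_real = torch.randn(8, 1)
scores_fake = torch.randn(8, 1)

# As written in losses.py: zeros times any scalar stay zero,
# so this line is a no-op and the fake labels are never smoothed.
y_fake_noop = torch.zeros_like(scores_fake) * random.uniform(0, 0.3)

# Real labels as in the repo: every entry equals one draw from U(0.7, 1.2).
y_real = torch.ones_like(scores_real) * random.uniform(0.7, 1.2)

# Hypothetical fix (assumes addition was intended): fake labels in [0, 0.3].
y_fake = torch.zeros_like(scores_fake) + random.uniform(0, 0.3)
```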
agrimgupta92 commented 6 years ago

This is done for label smoothing, as suggested here: https://github.com/soumith/ganhacks#6-use-soft-and-noisy-labels
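For context, here is a minimal sketch of how such soft labels feed into a binary cross-entropy discriminator loss. The function name and shapes are illustrative, not taken from the repo; the label ranges follow the ganhacks suggestion quoted above:

```python
import random

import torch
import torch.nn.functional as F

def d_loss_soft(scores_real, scores_fake):
    # Soft labels per ganhacks #6: real targets drawn from U(0.7, 1.2),
    # fake targets from U(0.0, 0.3), instead of hard 1s and 0s.
    y_real = torch.full_like(scores_real, random.uniform(0.7, 1.2))
    y_fake = torch.full_like(scores_fake, random.uniform(0.0, 0.3))
    # Cross-entropy over raw discriminator logits against the soft targets.
    loss_real = F.binary_cross_entropy_with_logits(scores_real, y_real)
    loss_fake = F.binary_cross_entropy_with_logits(scores_fake, y_fake)
    return loss_real + loss_fake
```

The softening keeps the discriminator from becoming overconfident, which would starve the generator of useful gradients.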

tessavdheiden commented 6 years ago

Hi Agrim!

Thanks. Unfortunately, for WGAN* (which I am using to get better convergence of the generator gradients), I cannot use this trick: the WGAN loss is computed over the scores only, not over labels.

Have you looked into other models (for instance VGAN)? It would be great to know, so I can learn from your experience :).

*I changed the loss functions, added gradient clipping for the discriminator, and switched the optimizer to RMSprop.
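To make the contrast concrete, here is a minimal sketch of the WGAN critic objective, which operates on raw scores with no labels at all (so there is nothing to smooth). The function names are illustrative; the weight-clipping constant 0.01 is the default from the original WGAN paper, which clips the critic's weights rather than its gradients:

```python
import torch

def critic_loss(scores_real, scores_fake):
    # WGAN critic objective: maximize E[D(real)] - E[D(fake)],
    # i.e. minimize its negation. Raw scores only -- no labels involved.
    return scores_fake.mean() - scores_real.mean()

def clip_weights(critic, c=0.01):
    # Weight clipping keeps the critic roughly Lipschitz-bounded,
    # as in the original WGAN recipe (paired with RMSprop).
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-c, c)
```

Since the loss is a plain difference of means, any label-smoothing trick has no place to attach; regularization has to act on the critic itself (clipping, or a gradient penalty in WGAN-GP).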