agrimgupta92 / sgan

Code for "Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks", Gupta et al, CVPR 2018
MIT License
813 stars 261 forks

D_data_loss and G_discriminator_loss don't change #56

Open agoodge opened 5 years ago

agoodge commented 5 years ago

As in the title, the adversarial losses don't change at all from 1.398 and 0.693 respectively after roughly epoch 2 until the end, though G_l2_loss does change. Any ideas what's wrong? I've tried changing the hyperparameters to those given in the pretrained models, as suggested in a previous thread.
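For what it's worth, 0.693 ≈ ln 2 and 1.386 ≈ 2 ln 2 are exactly the cross-entropy values you get when the discriminator's sigmoid output is frozen at 0.5 (i.e. pure chance), which suggests D has stopped learning entirely rather than reached an equilibrium. A minimal sketch in plain Python (not the repo's code; it assumes D_data_loss sums BCE over the real and fake batches and G's adversarial loss is BCE on the fake batch only):

```python
import math

# If the discriminator's sigmoid output is stuck at 0.5 for every sample,
# the losses come out at exactly the "frozen" values reported in this thread.
d_out = 0.5
d_data_loss = -math.log(d_out) - math.log(1 - d_out)  # real term + fake term
g_disc_loss = -math.log(d_out)                        # fake batch only
print(round(d_data_loss, 3), round(g_disc_loss, 3))   # 1.386 0.693
```

So losses pinned at these two constants are a strong hint that D's output is constant, not that the GAN has converged.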

PhyllisH commented 5 years ago

I met this problem as well. Have you figured out what is wrong?

agoodge commented 5 years ago

Have not :(

PhyllisH commented 5 years ago

You could change the parameter 'l2_loss_weight'. Then the loss would change.

agoodge commented 5 years ago

You mean reduce the weight of the L2 loss? Would that encourage the adversarial loss to decrease?

PhyllisH commented 5 years ago

I mean that you could change the default value of 'args.l2_loss_weight'.
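If I understand the suggestion, the generator objective in train.py is (roughly) the adversarial loss plus `l2_loss_weight` times the variety L2 loss, so the weight controls how much the trajectory regression term contributes relative to the GAN term. A rough sketch of that mixing (the function name and exact form are assumed, not the repo's code):

```python
# Hypothetical shape of the generator objective (names assumed):
def generator_loss(adv_loss, l2_loss, l2_loss_weight):
    # With l2_loss_weight == 0 the generator trains on the adversarial
    # term alone; raising it mixes in the variety L2 loss on the
    # predicted trajectories.
    return adv_loss + l2_loss_weight * l2_loss

print(generator_loss(0.693, 0.5, 0.0))  # adversarial term only
print(generator_loss(0.693, 0.5, 1.0))  # both terms contribute
```

Note that changing this weight alters which losses move, but it does not by itself explain why the adversarial terms are frozen.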

PhyllisH commented 5 years ago

However, the D_data_loss and G_discriminator_loss do not change after several epochs from 1.386 and 0.693 while other losses keep changing.

cuihenggang commented 5 years ago

Same question here. My loss doesn't change.

JackFram commented 4 years ago

I found that this could be because the discriminator's activation function is ReLU: the weight initialization can drive the pre-activations negative at the start, and since ReLU outputs 0 for all negative values, the gradient is 0 as well. Simply changing the discriminator's real_classifier activation to LeakyReLU could help.
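To illustrate the dead-ReLU effect described above with a toy numpy sketch (not the repo's actual discriminator): if an unlucky initialization leaves all pre-activations in the classifier head negative, ReLU's gradient is identically zero, so no signal ever reaches D's weights, while LeakyReLU still passes a scaled gradient:

```python
import numpy as np

def relu_grad(x):
    # Gradient of ReLU: 1 where x > 0, else 0.
    return (x > 0).astype(float)

def leaky_relu_grad(x, slope=0.2):
    # Gradient of LeakyReLU: 1 where x > 0, else the negative slope.
    return np.where(x > 0, 1.0, slope)

pre_act = np.array([-0.5, -1.2, -0.01])  # all negative, as after a bad init
print(relu_grad(pre_act))        # [0. 0. 0.] -> no gradient, D never learns
print(leaky_relu_grad(pre_act))  # [0.2 0.2 0.2] -> gradient still flows
```

This would explain why both adversarial losses freeze at their chance-level values while the L2 loss (which bypasses the discriminator) keeps moving.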

ZhoubinXM commented 4 years ago

> l2_loss_weight

Change it to what?

Yuliang-Zou commented 4 years ago

Even if I replace ReLU with LeakyReLU, the losses basically do not change.

ZhoubinXM commented 4 years ago

> Even if I replace ReLU with LeakyReLU, the losses basically do not change.

You can change the l2_loss_weight. It could help.

zpp960807 commented 4 years ago

I have met the same problem. Even if I set the l2_loss_weight to 1, the adversarial losses still didn't change; they stayed at 1.386 and 0.693.

buzhanpeng commented 1 year ago

If running on the Windows operating system, all parameters should be taken from run_traj.sh. You cannot run the train.py program directly.

neugzy commented 5 months ago

> I found out this could be due to the activation function of discriminator is ReLU, and the weight initialization would lead the output be 0 at the beginning, and since ReLU output 0 for all negative value, so gradient is 0 as well. Simply change discriminator's real_classifier's activation function to LeakyReLU could help.

Yes, using LeakyReLU does help the D_loss and G_loss change. But the evaluation results with ReLU and LeakyReLU seem to make no difference; both give reasonable results.

sjx3906 commented 3 months ago

> I found out this could be due to the activation function of discriminator is ReLU, and the weight initialization would lead the output be 0 at the beginning, and since ReLU output 0 for all negative value, so gradient is 0 as well. Simply change discriminator's real_classifier's activation function to LeakyReLU could help.

> Yes, using LeakyReLU could help change the D_loss and G_loss. But it seems that the evaluate results of relu and leakyRelu make no difference. Both of them can get reasonable results.

I changed ReLU to LeakyReLU in both the generator and the discriminator. The loss did change, but it looks very strange. Is this normal? (screenshot attached)