keras-team / keras-contrib

Keras community contributions
MIT License

Weird behaviour of loss_weights in model.compile #522

Closed ramonpeter closed 4 years ago

ramonpeter commented 4 years ago

I observe some weird behaviour when I use the "loss_weights" option with multiple outputs/losses in my model. For instance, I currently have code for a regularised GAN in which a gradient penalty is applied on the discriminator output. This gradient penalty usually gets a weight. When I implement this penalty weight via the "loss_weights" argument, I get completely different results than when I simply fold the penalty weight into the penalty loss itself.

That is, if I use the loss_weights option, the gradient penalty loss grows during training, which is unexpected and leads to bad results. If I instead remove the loss_weights and apply the penalty weight inside the loss itself, the gradient penalty stays around 0, which is what I expect, and the final results look fine.
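For reference, this is a hypothetical sketch (not the author's actual code) of why the two approaches are expected to be equivalent: Keras combines per-output losses as `total = sum_i loss_weights[i] * loss_i`, so a weight applied via `loss_weights` should give the same total loss as folding the same weight into the loss function. The loss functions and values below are made up for illustration.

```python
def mse(y_true, y_pred):
    # Mean squared error over a list of scalars.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def gradient_penalty(grad_norms, target=1.0):
    # Simplified WGAN-GP-style penalty: mean squared deviation
    # of the discriminator gradient norm from a target value.
    return sum((g - target) ** 2 for g in grad_norms) / len(grad_norms)

# Illustrative values (hypothetical).
adv_loss = mse([1.0, 0.0], [0.8, 0.3])
gp_loss = gradient_penalty([1.2, 0.9])
w = 10.0  # penalty weight

# Approach 1: weight supplied externally, as loss_weights=[1.0, w] would do.
total_via_loss_weights = 1.0 * adv_loss + w * gp_loss

# Approach 2: weight folded into the penalty loss function itself.
def weighted_gradient_penalty(grad_norms, target=1.0, weight=10.0):
    return weight * gradient_penalty(grad_norms, target)

total_via_folded_weight = 1.0 * adv_loss + weighted_gradient_penalty([1.2, 0.9])

# Mathematically identical, so training should behave the same either way.
assert abs(total_via_loss_weights - total_via_folded_weight) < 1e-12
```

Since the two totals agree term by term, any difference in training behaviour suggests the weighting is being applied somewhere other than a plain multiplication of the returned loss (e.g. interacting with metrics or the per-output loss bookkeeping).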

How is this possible?