Closed: grahamgower closed this issue 2 years ago.
Thank you for letting me know about this! (and apologies for the delay, I missed this issue earlier) I have fixed the entropy calculation now. I'll need to do more testing to reevaluate the constant, but I don't think this regularization term is doing that much anyway.
I think I found a small problem with the regularisation term in the discriminator loss function. The cross-entropy function being used is keras' `BinaryCrossentropy` with `from_logits=True`. This means that for `cross_entropy(a, b)`, `a` should be probabilities (or class labels) on `[0, 1]` and `b` should be logits on `(-inf, inf)`.
https://github.com/mathiesonlab/pg-gan/blob/b1a82ae331fb852ec486e43eb0efd5a16786237a/pg_gan.py#L288

In the discriminator loss function, this cross-entropy function is used appropriately for the main loss term:

https://github.com/mathiesonlab/pg-gan/blob/b1a82ae331fb852ec486e43eb0efd5a16786237a/pg_gan.py#L345-L347
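For reference, here's a minimal sketch of that usage pattern (illustrative values, not pg-gan's actual tensors):

```python
import tensorflow as tf

cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)

# Hypothetical discriminator outputs: raw logits, not probabilities.
real_output = tf.constant([2.3, -0.7, 1.1])
fake_output = tf.constant([-1.5, 0.4, -2.0])

# Labels on [0, 1] go first (1s for real, 0s for fake); logits go second.
real_loss = cross_entropy(tf.ones_like(real_output), real_output)
fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
total_loss = real_loss + fake_loss
```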
But then for the regularisation term, both arguments are given the same value, so the logits end up in the first position, where probabilities (or class labels) are expected. I guess the first argument is the wrong one.

https://github.com/mathiesonlab/pg-gan/blob/b1a82ae331fb852ec486e43eb0efd5a16786237a/pg_gan.py#L349-L354
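To make the distinction concrete, here's a sketch under my own assumptions (the actual intended fix may differ): if the term is meant to be the entropy of the discriminator's output, the first argument would need a sigmoid applied first.

```python
import tensorflow as tf

cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
logits = tf.constant([2.3, -0.7, 1.1])  # hypothetical discriminator logits

# As currently written: the logits are also treated as labels, which is
# only meaningful if they happen to fall on [0, 1].
current = cross_entropy(logits, logits)

# Assumed intent: the binary entropy of the output distribution,
#   H(p) = -p*log(p) - (1-p)*log(1-p),  with p = sigmoid(logits),
# which is what BinaryCrossentropy computes when the first argument
# is sigmoid(logits).
entropy = cross_entropy(tf.sigmoid(logits), logits)
```

Note the two versions also differ in scale: the current one is unbounded in the logits, whereas the entropy is bounded above by log 2.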
Maybe this still has a regularising effect? I guess the constant `0.001` would need to be empirically rediscovered if this code were changed?