IshmaelBelghazi / ALI

Adversarially Learned Inference
MIT License

sign on loss function #14

Open johnwlambert opened 7 years ago

johnwlambert commented 7 years ago

Hi, in the paper pseudocode, for Loss_d and Loss_g, the gradient ascent is turned into gradient descent by making the whole loss negative (negative signs in front of every term).

However, in the code, I don't see any of those negative signs. What am I missing?

Thank you!

IshmaelBelghazi commented 7 years ago

The negative sign in front of the discriminator loss is absorbed into the softplus.

edgarriba commented 7 years ago

@johnwlambert check the expanded formulas

[image: expanded formulas]

edgarriba commented 7 years ago

where softplus(x) = log(1 + exp(x))
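To see how the softplus absorbs the negative signs, note that -log(sigmoid(x)) = softplus(-x) and -log(1 - sigmoid(x)) = softplus(x), so the negated log-sigmoid terms in the paper's losses can be written directly as softplus terms with no explicit minus in front. A minimal NumPy sketch checking these identities numerically (not the repo's Theano code, just an illustration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softplus(x):
    # softplus(x) = log(1 + exp(x))
    return np.log1p(np.exp(x))

x = np.linspace(-5.0, 5.0, 101)

# -log(sigmoid(x)) == softplus(-x)
print(np.allclose(-np.log(sigmoid(x)), softplus(-x)))   # True

# -log(1 - sigmoid(x)) == softplus(x)
print(np.allclose(-np.log(1.0 - sigmoid(x)), softplus(x)))  # True
```

So a loss written as softplus(-D(...)) + softplus(D(...)) already contains the negative signs from the paper's pseudocode, which is why none appear explicitly in the code.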