unilight closed this issue 6 years ago
I have not looked at the training curves so far because TensorBoard was disabled due to incompatibilities, but yes, this is what you would expect. It's a true zero-sum game between P and Q; both distributions share D(x) in [0,1]. D(x_real) increases above .50, which implicitly decreases D(x_fake) below .50; then D(x_fake) increases above .50, which implicitly decreases D(x_real) below .50, and so on.
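The coupling described above follows directly from the relativistic discriminator: since sigmoid(a) + sigmoid(-a) = 1, pushing D(x_real, x_fake) above .50 necessarily pushes D(x_fake, x_real) below .50. A minimal NumPy check (the helper name `rel_d` is mine, not from the paper's code):

```python
import numpy as np

def rel_d(c_a, c_b):
    # Relativistic discriminator output: estimated probability
    # that sample a is "more real" than sample b.
    return 1.0 / (1.0 + np.exp(-(c_a - c_b)))

c_real, c_fake = 1.3, -0.7
# The two directions always sum to 1, so one rising above .50
# forces the other below .50 -- the zero-sum behaviour above.
total = rel_d(c_real, c_fake) + rel_d(c_fake, c_real)
```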
I wouldn't know for sure, but I assume it should; the relativistic approach can be used in almost every setting.
Hi Alexia, what about the loss curve of the generator? Should it be around 1?
Thanks in advance.
It should stay mostly flat around some small number.
Hi, I have read your paper. It was a really interesting idea!
I've been trying to implement your paper in TensorFlow, and I wonder if my implementation is right. I'm familiar with WGAN-GP, so I tried RSGAN-GP first. I looked at the training curve of the discriminator loss and found it fluctuating around 0.5. I wonder if this is a normal phenomenon?
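For reference, a framework-agnostic sketch of the RSGAN losses (written in NumPy rather than TensorFlow so it stands alone; the function name is mine). Note that at equilibrium, where the critic cannot separate real from fake, C(x_real) ≈ C(x_fake) and both losses settle near -log(0.5) ≈ 0.693 rather than 0.5:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rsgan_losses(c_real, c_fake):
    """Relativistic standard GAN losses from critic outputs (logits).

    D loss: -E[log sigmoid(C(x_real) - C(x_fake))]
    G loss: -E[log sigmoid(C(x_fake) - C(x_real))]
    """
    eps = 1e-12  # guard against log(0)
    d_loss = -np.mean(np.log(sigmoid(c_real - c_fake) + eps))
    g_loss = -np.mean(np.log(sigmoid(c_fake - c_real) + eps))
    return d_loss, g_loss

# At equilibrium the critic outputs match, so both losses are ~log(2).
c = np.zeros(8)
d_loss, g_loss = rsgan_losses(c, c)
```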
Also, I wonder if the idea of RGAN is extendable to hybrid models, e.g. VAE-GAN, or can be combined with other MSE-like loss functions?