Luciennnnnnn opened this issue 1 month ago
There is a typo here; it should be $\textrm{ReLU}(1 + D(\tilde{x}_s, c, s)) + \textrm{ReLU}(1 - D(\hat{x}_s, c, s))$, where $D$ is the discriminator, $c$ is the text condition, and $s$ is the timestep condition. The $\textrm{ReLU}$ function makes this equivalent to the hinge loss commonly used as a GAN objective.
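As a minimal sketch of what this hinge-form discriminator objective computes (NumPy, function names hypothetical; I'm assuming $\hat{x}_s$ plays the "real" role and $\tilde{x}_s$ the "fake" role, matching the usual sign convention):

```python
import numpy as np

def relu(x):
    # ReLU(x) = max(0, x), applied elementwise.
    return np.maximum(0.0, x)

def hinge_d_loss(d_real, d_fake):
    """Hinge discriminator loss:
    mean ReLU(1 - D(real)) + mean ReLU(1 + D(fake)).
    Penalizes real scores below +1 and fake scores above -1;
    confident correct scores contribute zero loss."""
    return np.mean(relu(1.0 - d_real)) + np.mean(relu(1.0 + d_fake))

# Example: scores already beyond the margin contribute nothing.
d_real = np.array([2.0, 0.5])   # real sample scores from D
d_fake = np.array([-2.0, 0.5])  # fake sample scores from D
print(hinge_d_loss(d_real, d_fake))  # → 1.0
```

The margin at $\pm 1$ is what distinguishes this from a plain sign-based objective: once the discriminator scores a sample confidently on the correct side, the gradient for that sample vanishes.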
Yes, thank you for clarifying. What I was asking about is the name/origin of this adversarial loss. So it is the hinge loss, got it!
Hi, what's the motivation behind the adversarial loss ReLU(1 + x) + ReLU(1 - x)? Is there any reference?