netrome opened this issue 6 years ago
Working on AEGAN, I want to test sampling from a uniform distribution and add a translated ReLU loss on the autoencoder to constrain the latent code to this space (a sketch of what such a penalty might look like is below). I should try training the autoencoder with this loss first.
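A minimal sketch of what such a translated ReLU penalty might look like, assuming a PyTorch setup and the box `[-bound, bound]^d` as the target latent space; the function name and default bound are hypothetical:

```python
import torch
import torch.nn.functional as F

def box_penalty(z: torch.Tensor, bound: float = 1.0) -> torch.Tensor:
    """Translated ReLU penalty: zero for codes inside [-bound, bound]^d,
    growing linearly with the distance outside the box."""
    return F.relu(z.abs() - bound).sum(dim=1).mean()
```

The total autoencoder loss would then be something like `recon_loss + lam * box_penalty(z)`, with `lam` a hypothetical weighting factor.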
Normal distributions are nicer than uniform distributions. Idea: penalize each encoded point with (some normalized) inverse probability density of the prior, so low-density codes get pushed back toward the bulk of the distribution (see the sketch below).
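One way to make this concrete for a standard-normal prior: the inverse density is proportional to `exp(||z||^2 / 2)`, so penalizing its logarithm amounts to penalizing `||z||^2 / 2`. A sketch under that assumption (function name hypothetical):

```python
import torch

def inverse_density_penalty(z: torch.Tensor) -> torch.Tensor:
    """Log of the (normalized) inverse standard-normal density, up to an
    additive constant: -log p(z) = ||z||^2 / 2 + const. Penalizing this
    pushes latent codes toward the high-density region of N(0, I)."""
    return 0.5 * z.pow(2).sum(dim=1).mean()
```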
Another great thing about this AEGAN: The reconstruction error can be used as a measure of convergence.
The convergence time for this algorithm seems high. If the autoencoder fails to encode all relevant information (especially sharp edges), the reconstruction loss and the GAN loss are at risk of counteracting each other.
This is a sort of adversarial autoencoder: train a normal autoencoder (with constraints on the latent space to prevent drifting) simultaneously with a normal GAN (or perhaps an LSGAN?). A sketch of one combined training step follows.
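A rough sketch of what one combined training step might look like under that reading, using LSGAN losses; `enc`, `dec`, `disc`, the optimizers, and `lam` are all hypothetical names, the latent code is assumed to be a flat `(N, d)` tensor, and the decoder doubles as the GAN generator:

```python
import torch
import torch.nn.functional as F

def train_step(x, enc, dec, disc, opt_ae, opt_disc, lam=1.0):
    """One combined step: autoencoder update (reconstruction + latent
    constraint + LSGAN generator loss), then a discriminator update."""
    # Autoencoder / generator update.
    opt_ae.zero_grad()
    z = enc(x)
    x_hat = dec(z)
    recon = F.mse_loss(x_hat, x)               # doubles as the convergence measure
    latent = 0.5 * z.pow(2).sum(dim=1).mean()  # keep codes near the prior
    z_prior = torch.randn(x.size(0), z.size(1), device=x.device)
    fake = dec(z_prior)
    gen = ((disc(fake) - 1) ** 2).mean()       # LSGAN generator loss: fake -> 1
    (recon + lam * latent + gen).backward()
    opt_ae.step()

    # Discriminator update (LSGAN: real -> 1, fake -> 0).
    opt_disc.zero_grad()
    d_real = ((disc(x) - 1) ** 2).mean()
    d_fake = (disc(fake.detach()) ** 2).mean()
    (d_real + d_fake).backward()
    opt_disc.step()
    return recon.item()
```

The reconstruction term is returned so it can be logged as the convergence measure mentioned above.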