Open eitanrich opened 4 years ago
Is it intentional that the D module (MappingToLatent) consists of three F.Linear layers without any activations (e.g. no ReLU / Leaky ReLU)?

https://github.com/podgorskiy/ALAE/blob/5d8362f3ce468ece4d59982ff531d1b8a19e792d/net.py#L894

That's an error, but I won't change it, to stay consistent with the published results. Most likely the effect won't be significant, but I'm curious to see how the results would differ.

Maybe the behavior is related to the Implicit Rank-Minimizing Autoencoder :)
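For reference, a minimal, self-contained sketch of the pattern being discussed (not the actual code from net.py; the class and parameter names below are hypothetical): three Linear layers stacked with no activation in between. Since a composition of affine maps is itself affine, such a stack is expressively equivalent to a single Linear layer, which is why it reads as an error; the IRMAE observation is that the redundant linear layers can nonetheless change the gradient dynamics and bias the learned map toward low rank.

```python
import torch
import torch.nn as nn

# Illustrative stand-in for the module in question: three stacked Linear
# layers with no nonlinearity between them. Names are hypothetical, not
# the actual classes in ALAE's net.py.
class MappingStack(nn.Module):
    def __init__(self, in_features=8, out_features=8):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(in_features, out_features),   # no ReLU / LeakyReLU after this
            nn.Linear(out_features, out_features),  # ...or this
            nn.Linear(out_features, out_features),
        )

    def forward(self, x):
        return self.layers(x)

# Without nonlinearities, W3(W2(W1 x + b1) + b2) + b3 collapses to a single
# affine map W x + b. Verify numerically by folding the three layers into one.
torch.manual_seed(0)
m = MappingStack()

W = torch.eye(8)
b = torch.zeros(8)
for layer in m.layers:
    W = layer.weight @ W
    b = layer.weight @ b + layer.bias

x = torch.randn(4, 8)
assert torch.allclose(m(x), x @ W.T + b, atol=1e-5)
print("three activation-free Linear layers == one Linear layer")
```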