tolstikhin / wae

Wasserstein Auto-Encoders
BSD 3-Clause "New" or "Revised" License

Latent discriminator for WAE-MMD? #3

Closed ahmed-fau closed 6 years ago

ahmed-fau commented 6 years ago

Hi, thanks for this interesting paper and implementation :)

My question: why do we still need a discriminator network for the WAE-MMD approach, as stated in Algorithm 2 of the paper?

Best Regards

tolstikhin commented 6 years ago

Dear Ahmed,

that is a typo :) You don't need the discriminator in Algorithm 2.
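(For a later reader of this thread: the WAE-MMD penalty is just a sample-based kernel statistic between the encoded codes and samples from the prior, so no discriminator network appears anywhere. A minimal NumPy sketch — illustrative only, not the repo's TensorFlow code — using the inverse multiquadratic kernel from the paper, with the `scale` constant chosen arbitrarily here:)

```python
import numpy as np

def imq_kernel(a, b, scale):
    # Inverse multiquadratic kernel k(x, y) = C / (C + ||x - y||^2);
    # `scale` plays the role of C (a hyperparameter, chosen arbitrarily here).
    sq_dists = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return scale / (scale + sq_dists)

def mmd_penalty(z_encoded, z_prior, scale=2.0):
    # Unbiased sample estimate of MMD^2 between the batch of encoded codes
    # and a batch of samples from the prior. Purely computed from samples:
    # there is no discriminator to train.
    n = z_encoded.shape[0]
    k_zz = imq_kernel(z_encoded, z_encoded, scale)
    k_pp = imq_kernel(z_prior, z_prior, scale)
    k_zp = imq_kernel(z_encoded, z_prior, scale)
    off_diag = lambda k: (k.sum() - np.trace(k)) / (n * (n - 1))
    return off_diag(k_zz) + off_diag(k_pp) - 2.0 * k_zp.mean()
```

Codes that already match the prior yield a penalty near zero, while mismatched codes yield a larger value — which is exactly the signal Algorithm 2 backpropagates through the encoder.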

Ilya

ahmed-fau commented 6 years ago

Cool, thanks!

Just one more "conceptual" question: what I have understood from the paper is that a WAE is essentially a simple auto-encoder with a traditional reconstruction loss, plus a constraint that the latent code Z should follow a tractable prior distribution (e.g. a Gaussian). Generative modeling then amounts to sampling directly from that Gaussian and decoding, in order to generate samples from the data distribution. Is this correct, or have I misunderstood much of the idea?

Best

tolstikhin commented 6 years ago

Yes, indeed, you got it correct. It's just a regularized auto-encoder, where the encoded data distribution should match the prior. Of course, it works with any reconstruction cost function c(X,Y) of your choice, where X is an image and Y is its reconstruction. In the paper we used the squared L2 loss so that we could "fairly" compare to VAE (which, when combined with Gaussian decoders, leads exactly to that loss).
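(To make that description concrete, here is a toy NumPy sketch of the whole objective — a linear "encoder" and "decoder" stand in for real networks, the RBF kernel bandwidth and the weight `lam` are arbitrary illustrative choices, and nothing here is the repo's actual code:)

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_z = 8, 2

# Toy linear encoder/decoder standing in for real networks (illustrative only).
W_enc = rng.standard_normal((d_x, d_z)) * 0.1
W_dec = rng.standard_normal((d_z, d_x)) * 0.1

def rbf_mmd(a, b, bw=2.0):
    # Biased (V-statistic) MMD^2 estimate with an RBF kernel.
    k = lambda u, v: np.exp(-np.sum((u[:, None] - v[None]) ** 2, -1) / bw)
    return k(a, a).mean() + k(b, b).mean() - 2.0 * k(a, b).mean()

def wae_mmd_loss(x, lam=10.0):
    z = x @ W_enc                                       # encode
    x_rec = z @ W_dec                                   # decode
    recon = np.mean(np.sum((x - x_rec) ** 2, axis=1))   # c(X, Y) = ||X - Y||^2
    penalty = rbf_mmd(z, rng.standard_normal(z.shape))  # match codes to N(0, I)
    return recon + lam * penalty

# After training, generative modeling is exactly what you described:
# sample z from the Gaussian prior and decode it.
x_gen = rng.standard_normal((5, d_z)) @ W_dec
```

The penalty term is the only thing that distinguishes this from a plain auto-encoder; once the encoded distribution matches N(0, I), decoded prior samples look like decoded data codes.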

Let me know if you have any further questions!

Ilya