Open armarion opened 9 months ago
I am not sure the reparameterization step is best described as just "noise". Without reparameterization it would simply be an encoder-decoder, not a variational autoencoder. Reparameterization helps keep the network from overfitting the inputs, and it makes the sampling step differentiable so the model can be trained with backpropagation.
This is a general question about reconstructing data after training the VAE, that is, passing the original data into the encoder and decoder...
In the code examples of these VAEs, the reconstruction is performed by passing the original data through the full forward pass of the network. That pass adds noise to the latent representation via the reparameterization trick before the decoder step. But shouldn't we evaluate reconstruction quality without this noise? For example, should we instead take the means produced by the encoder and pass them directly to the decoder, without any injected noise?
And related to this question: when assessing the distributional qualities of the latent space, should we be looking at the encoded latent states without this noise (again, just the vector of mu)?
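To make the question concrete, here is a minimal toy sketch (numpy only, random untrained weights, all names hypothetical) contrasting the usual stochastic forward pass with a deterministic reconstruction that feeds the encoder means `mu` straight into the decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "encoder" and "decoder" with placeholder random weights;
# a real VAE would learn these parameters.
W_mu = rng.normal(size=(4, 2))      # input -> latent mean
W_logvar = rng.normal(size=(4, 2))  # input -> latent log-variance
W_dec = rng.normal(size=(2, 4))     # latent -> reconstruction

def encode(x):
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    # Training-time sampling: z = mu + sigma * eps, with eps ~ N(0, I)
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    return z @ W_dec

x = rng.normal(size=(1, 4))
mu, logvar = encode(x)

# Stochastic reconstruction: what the full forward pass produces.
recon_sampled = decode(reparameterize(mu, logvar, rng))

# Deterministic reconstruction: skip the noise, decode the mean directly.
recon_mean = decode(mu)
```

`recon_mean` is reproducible across calls, while `recon_sampled` changes with each draw of `eps`; the question is which of the two should be used when reporting reconstruction quality.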