bsonghao / machine-learning-for-physicists

Code for "Machine Learning for Physicists 2020" lecture series
https://pad.gwdg.de/s/HJtiTE__U

Write a variational auto-encoder #2

Open bsonghao opened 1 year ago

bsonghao commented 1 year ago

There are two types of autoencoders: the plain AutoEncoder (AE) and the Variational AutoEncoder (VAE). For details, please refer to the following article: https://towardsdatascience.com/difference-between-autoencoder-ae-and-variational-autoencoder-vae-ed7be1c038f2

87b0474 and the corresponding notebook provide a visualization of the output of the hidden layer of the AE for an image-reconstruction process. However, to achieve true "unsupervised learning" we need one more step: the VAE.
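
For concreteness, here is a minimal sketch of the extra step the VAE adds on top of the plain AE: the encoder outputs the mean and log-variance of a Gaussian q(z|x), a latent sample is drawn with the reparameterization trick, and the training loss adds a KL-divergence term to the reconstruction error. The PyTorch framework, layer sizes, and latent dimension below are my own assumptions for illustration, not the code in the notebook.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal VAE for 28x28 images flattened to 784 pixels (sizes assumed)."""
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=2):
        super().__init__()
        # Encoder maps the image to the mean and log-variance of q(z|x)
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.enc_mu = nn.Linear(hidden_dim, latent_dim)
        self.enc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder maps a latent sample z back to pixel space
        self.dec1 = nn.Linear(latent_dim, hidden_dim)
        self.dec2 = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.enc_mu(h), self.enc_logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps keeps the sampling step differentiable
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(x_recon, x, mu, logvar):
    # Reconstruction term plus KL divergence of q(z|x) from the unit Gaussian prior
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

The KL term is what distinguishes the VAE from the AE: it regularizes the latent space toward a standard normal distribution, so that new samples can be generated by decoding draws from the prior.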

bsonghao commented 1 year ago

In quantum chemistry, our goal is to solve the eigenvalue problem of a quantum Hamiltonian; here, the goal is to apply a VAE to that eigenvalue problem.
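
As a point of reference, the direct way to solve H|ψ⟩ = E|ψ⟩ is exact diagonalization, and any VAE-based variational approach would be benchmarked against it. The sketch below uses a random Hermitian matrix as a stand-in Hamiltonian, which is an assumption for illustration only, not a real quantum-chemistry Hamiltonian.

```python
import numpy as np

# Stand-in Hamiltonian: a random Hermitian matrix (a real application would
# build H from the integrals of the quantum-chemistry problem).
rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2

# Exact diagonalization: the baseline a variational / VAE-based
# approach to H |psi> = E |psi> would be compared against.
energies, states = np.linalg.eigh(H)
ground_energy = energies[0]   # lowest eigenvalue
ground_state = states[:, 0]   # corresponding eigenvector

print("ground-state energy:", ground_energy)
```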

bsonghao commented 1 year ago

6277630d4ef6f6c27a72fcb9f0eaa2d146017af0 tests the t-SNE approach on the MNIST database, where 1000 images are randomly sampled from the 50000-image training set.
[figure: 2D t-SNE embedding of the 1000 sampled MNIST images]

Compared to PCA and the AE, the labels are clearly much better separated in the latent space. However, I still have some difficulty fully understanding how this algorithm works.
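
For reference, a minimal sketch of the kind of t-SNE run described above, written with scikit-learn; the fetch_openml loader, the perplexity value, and the random seed are assumptions for illustration, not the exact code in the commit. Roughly, t-SNE converts pairwise distances into similarities (Gaussian in the 784-dimensional pixel space, heavy-tailed Student-t in the 2D embedding) and moves the embedded points to minimize the KL divergence between the two, which is why tight same-label clusters emerge.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_openml
from sklearn.manifold import TSNE

# Load MNIST and randomly sample 1000 of the first 50000 (training) images
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
rng = np.random.default_rng(0)
idx = rng.choice(50000, size=1000, replace=False)
X_sub, y_sub = X[idx] / 255.0, y[idx].astype(int)

# Embed the 784-dimensional images into 2D with t-SNE
emb = TSNE(n_components=2, perplexity=30, init="pca",
           random_state=0).fit_transform(X_sub)

# Scatter plot of the embedding, colored by digit label
plt.scatter(emb[:, 0], emb[:, 1], c=y_sub, cmap="tab10", s=8)
plt.colorbar(label="digit label")
plt.title("t-SNE of 1000 MNIST images")
plt.show()
```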