mkusner / grammarVAE

Code for the "Grammar Variational Autoencoder" https://arxiv.org/abs/1703.01925

Question on Latent Space Dimension #4

Closed: chao1224 closed this issue 7 years ago

chao1224 commented 7 years ago

@mkusner Hi Kusner, I have some questions about the dimension of the latent space.

I checked the pretrained folder and the training scripts for GVAE (https://github.com/mkusner/grammarVAE/blob/master/train_zinc.py#L21) and CVAE (https://github.com/mkusner/grammarVAE/blob/master/train_zinc_str.py#L18): in your setting, you use 56 as the latent dimension for both GVAE and CVAE. And in the paper's appendix, you also plot a 2-dimensional latent space.

So two questions here:

  1. It seems you show the VAE in 2- and 56-dimensional latent spaces. Have you tried optimizing over other latent space dimensions? Or did you explore that during Bayesian optimization?
  2. In Gómez-Bombarelli's paper and code, he tried both 56 and 292 as the latent space dimension. Have you also run those experiments? Or did you skip 292 because 56 is better?

Please correct me if I'm missing something.

mkusner commented 7 years ago

Hey Shengchao!

  1. We didn't try optimizing over the latent dimension. We used 56 dimensions for all experiments except the logP visualization, where we used 2 dimensions. Depending on the problem you're interested in solving, one idea for selecting the latent dimension is cross-validation: find the smallest latent dimension that gives the best reconstruction error on a holdout validation set (a sketch follows below this list).
  2. On the machine I was using, I ran into memory issues with a 292-dimensional latent space, so I wasn't able to train the GVAE at that size. But I'd be curious to know how it compares to the CVAE.
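
To make the cross-validation idea in point 1 concrete, here is a minimal sketch (not from this repo) that sweeps candidate latent dimensions and keeps the smallest one whose holdout reconstruction error is close to the best. It uses a plain dense autoencoder with a Keras 2-style API as a stand-in for the VAE; `build_autoencoder`, the candidate grid `dims`, and the tolerance `tol` are all illustrative assumptions.

```python
from keras.models import Model
from keras.layers import Input, Dense

def build_autoencoder(input_dim, latent_dim):
    # Simple dense autoencoder standing in for the (G)VAE; inputs are
    # assumed to be in [0, 1] (e.g. flattened one-hot sequences).
    x = Input(shape=(input_dim,))
    z = Dense(latent_dim, activation='relu')(x)
    x_hat = Dense(input_dim, activation='sigmoid')(z)
    model = Model(x, x_hat)
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model

def select_latent_dim(X_train, X_valid, dims=(2, 8, 16, 32, 56), tol=1e-3):
    # Train one model per candidate dimension and record its holdout
    # reconstruction error.
    errs = {}
    for d in dims:
        ae = build_autoencoder(X_train.shape[1], d)
        ae.fit(X_train, X_train, epochs=20, batch_size=256, verbose=0)
        errs[d] = ae.evaluate(X_valid, X_valid, verbose=0)
    best = min(errs.values())
    # Smallest dimension whose error is within `tol` of the best one.
    return min(d for d in dims if errs[d] <= best + tol)
```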

chao1224 commented 7 years ago

Thank you very much. I appreciate your help.