-
I found that `ctx.needs_input_grad[1]` is `False` while training a VQ-VAE. Is this correct, and does it mean the codebook embedding is not updated during training?
https://github.com/ritheshkumar…
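For reference, whether the codebook updates usually does not depend on gradients flowing through the nearest-neighbour lookup at all: in the standard VQ-VAE formulation the argmin is non-differentiable, and the embeddings are trained by the codebook loss ‖sg[z_e] − e‖² (or by EMA updates), while the decoder gradient is copied to the encoder via the straight-through estimator. Below is a minimal, self-contained sketch of that formulation, not the linked repository's actual code; all names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizerSketch(nn.Module):
    """Minimal VQ layer sketch (illustrative, not the repo's implementation)."""

    def __init__(self, num_embeddings, embedding_dim, beta=0.25):
        super().__init__()
        self.embedding = nn.Embedding(num_embeddings, embedding_dim)
        self.embedding.weight.data.uniform_(-1.0 / num_embeddings, 1.0 / num_embeddings)
        self.beta = beta

    def forward(self, z_e):  # z_e: (B, D) encoder outputs
        # Nearest codebook entry; no gradient flows through the argmin lookup.
        distances = torch.cdist(z_e, self.embedding.weight)
        indices = distances.argmin(dim=1)
        z_q = self.embedding(indices)

        # The codebook_loss term is what trains the embeddings;
        # the commitment term pulls the encoder towards the chosen codes.
        codebook_loss = F.mse_loss(z_q, z_e.detach())
        commitment_loss = F.mse_loss(z_e, z_q.detach())
        vq_loss = codebook_loss + self.beta * commitment_loss

        # Straight-through estimator: pass decoder gradients to the encoder.
        z_q = z_e + (z_q - z_e).detach()
        return z_q, vq_loss, indices
```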
-
Observing that this project has no mature data preprocessing and trains directly on raw data, we propose adding a data preprocessing step.
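As an illustration of what such a step could look like, here is a sketch of per-feature standardization fitted on the training split only; this is an assumption about the intended preprocessing, and all names are hypothetical.

```python
import numpy as np

def fit_standardizer(train_x: np.ndarray):
    # Compute statistics on the training split only, to avoid leakage.
    mean = train_x.mean(axis=0)
    std = train_x.std(axis=0) + 1e-8  # avoid division by zero
    return mean, std

def standardize(x: np.ndarray, mean: np.ndarray, std: np.ndarray) -> np.ndarray:
    return (x - mean) / std

# Toy raw data standing in for the project's dataset.
train = np.random.randn(100, 8) * 5.0 + 3.0
mean, std = fit_standardizer(train)
train_norm = standardize(train, mean, std)
```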
-
I am using 4x Nvidia V100 GPUs and cannot fit a batch size larger than 32 with this paper's hyperparameters when training on the top codes. I have also changed the loss to discretized mixtures…
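One common workaround when the per-GPU batch size is memory-limited is gradient accumulation, which simulates a larger effective batch. A minimal sketch with toy stand-ins for the project's model, data loader, and loss (none of these names come from the repository):

```python
import torch
import torch.nn as nn

# Toy stand-ins; the real project would use its own model, loader, and loss.
model = nn.Linear(16, 16)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loader = [(torch.randn(8, 16), torch.randn(8, 16)) for _ in range(8)]
loss_fn = nn.MSELoss()
accum_steps = 4  # effective batch size = 8 * 4 = 32 per optimizer step

model.train()
optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    loss = loss_fn(model(x), y) / accum_steps  # scale so accumulated grads average
    loss.backward()                            # gradients add up across micro-batches
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```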
-
https://github.com/AntixK/PyTorch-VAE/blob/master/models/vq_vae.py#L216
Is there any reason for this?
-
Dear DeepMind team,
I am really grateful that you shared the vqvae_example with sonnet2. However, when running it, I encounter a NaN VQ-VAE loss from the very beginning. The outcome is…
-
Thanks for your impressive work! I have a few questions after reading your paper GestureDiffuCLIP.
1. The MotionCLIP model uses SMPL parameters as the motion representation, while BEAT and ZeroEGG…
-
Nice work!
I notice that you pretrain a VQ-VAE to compress the image sequence into a discrete latent space, and explore an auto-regressive decoder named Earthformer-AR.
I'm interested in the tra…
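For context, the usual two-stage recipe behind such a setup is: freeze the pretrained VQ-VAE, map each input to a grid of codebook indices, and train an autoregressive model over the flattened index sequence with a next-token cross-entropy loss. A toy sketch of that second stage follows; all shapes and module names are hypothetical and this is not Earthformer-AR's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical setup: codes produced by a frozen VQ-VAE encoder, 16x16 latent grid.
num_codes, seq_len, batch = 512, 16 * 16, 4
indices = torch.randint(num_codes, (batch, seq_len))  # stand-in for real code indices

class TinyARPrior(nn.Module):
    """Toy causal Transformer prior over codebook indices."""

    def __init__(self, num_codes, max_len, d_model=128, n_layers=2, n_heads=4):
        super().__init__()
        self.tok = nn.Embedding(num_codes, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, num_codes)

    def forward(self, idx):
        length = idx.size(1)
        pos = torch.arange(length, device=idx.device)
        h = self.tok(idx) + self.pos(pos)
        # Causal mask: -inf above the diagonal blocks attention to future codes.
        mask = torch.full((length, length), float("-inf"), device=idx.device).triu(1)
        h = self.blocks(h, mask=mask)
        return self.head(h)

model = TinyARPrior(num_codes, seq_len)
logits = model(indices[:, :-1])                       # predict each next code
loss = F.cross_entropy(logits.reshape(-1, num_codes), indices[:, 1:].reshape(-1))
```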
-
https://arxiv.org/pdf/1711.00937.pdf
-
I tried VQ-VAE training with the parameters suggested in the documentation (however, since we are building the HumanML3D dataset, we used the '--dataname kit' option).
However, 'RuntimeError: Unable to f…