-
Hello! Thank you for the clean, user-friendly codebase!
I'm trying to fine-tune the VQ-VAE tokenizer and noticed some keys might be missing from the pretrained checkpoint listed on Hugging Face: `"o…
-
Hello, author. Could you explain what the three files in the open-source VQ-VAE code represent? ([2022-xx-xx/xx-xx])
-
I haven't found a pre-trained model for the VQ-VAE yet. Could you tell me the path to this pre-trained model?
-
The paper emphasizes joint training of the sparse encoder and the dense VQ-VAE to optimize the codebook and improve generalization. But in this code, joint training does not seem to be done, right? Is there any reason…
-
## Enhancement
Thanks for this wonderful work.
However, is there any guidance on training the VQ-VAE in MS-ILLM?
-
**Is your feature request related to a problem? Please describe.**
Currently, the timm library lacks implementations for Variational Autoencoder (VAE) and Vector Quantized VAE (VQ-VAE) models. Users …
-
Hi,
I'm using your implementation to generate MRIs. I have trained a VQ-VAE to reconstruct 3D MRIs, but I am unsure which vectors to use for training the PixelCNN for sampling.
I attempted…
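
For reference, my current understanding of the sampling setup, as a minimal PyTorch sketch (all names here — `extract_code_indices`, `vqvae.encoder`, `codebook` — are hypothetical, not from this repository's API): the PixelCNN prior is normally trained on the *discrete index grid*, not on the continuous encoder outputs.

```python
import torch

@torch.no_grad()
def extract_code_indices(vqvae, volume, codebook):
    """Map a 3D volume to discrete codebook indices (hypothetical API).

    vqvae.encoder : maps input to continuous latents z_e, shape (B, D, d, h, w)
    codebook      : embedding matrix, shape (K, D)
    Returns an index grid of shape (B, d, h, w); this grid is what the
    PixelCNN prior is trained on, not the continuous latent vectors.
    """
    z_e = vqvae.encoder(volume)                        # (B, D, d, h, w)
    flat = z_e.permute(0, 2, 3, 4, 1).reshape(-1, codebook.shape[1])
    # Nearest-neighbour lookup against the codebook.
    dists = torch.cdist(flat, codebook)                # (N, K)
    idx = dists.argmin(dim=1)
    return idx.view(z_e.shape[0], *z_e.shape[2:])      # (B, d, h, w)
```

Is this the intended pipeline, or should the PixelCNN see something else?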
-
Hi,
I am currently studying VideoGPT and have some doubts about the VQ-VAE losses.
Where is the VQ loss?
- In the original [paper](https://arxiv.org/pdf/1711.00937.pdf), there are three losses: a reconstructi…
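
To make my question concrete, here is the three-term objective from the original paper as I understand it — a minimal PyTorch sketch, where `vq_vae_loss` and the variable names are my own, not from the VideoGPT code:

```python
import torch
import torch.nn.functional as F

def vq_vae_loss(x, x_recon, z_e, z_q, beta=0.25):
    """Three-term VQ-VAE objective (van den Oord et al., 2017).

    x       : original input
    x_recon : decoder output
    z_e     : continuous encoder output
    z_q     : quantized latents (nearest codebook vectors)
    """
    recon_loss = F.mse_loss(x_recon, x)
    # Codebook loss: pull embeddings toward encoder outputs (detach = stop-gradient).
    codebook_loss = F.mse_loss(z_q, z_e.detach())
    # Commitment loss: keep encoder outputs close to their chosen embeddings.
    commitment_loss = F.mse_loss(z_e, z_q.detach())
    return recon_loss + codebook_loss + beta * commitment_loss
```

I can find the reconstruction term in the code, but where do the codebook and commitment terms appear?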
-
Great work. Thanks for sharing the code.
Could you please share the training logs for the VQ-VAE on both the KIT-ML and HumanML3D (t2m) datasets?
Thanks
-
Why does the Latent Diffusion Model use **variational autoencoders (VAE)** or similar generative models like **VQ-GAN/VAE** for compression instead of a plain **autoencoder (AE)**? If AE can be consider…