-
Great work. Thanks for sharing the code.
Could you please share the log for training the VQ-VAE for both KIT-ML dataset and HumanML3D (t2m) dataset?
Thanks
-
For training the VQ-VAE component of a latent diffusion model à la `CompVis/ldm-celebahq-256` (which uses `diffusers.VQModel`), is there a combined loss term for each of the losses as described by the…
asy51 updated 2 months ago
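As background for the question above, the standard VQ-VAE objective from van den Oord et al. combines three terms: reconstruction, codebook, and commitment. A minimal sketch, assuming generic tensor names (`z_e`, `z_q` are illustrative placeholders, not the `diffusers.VQModel` API):

```python
import torch
import torch.nn.functional as F

def vqvae_loss(x, x_recon, z_e, z_q, beta=0.25):
    """Combined VQ-VAE loss (sketch, not the diffusers implementation).

    x       : input batch
    x_recon : decoder output
    z_e     : encoder output before quantization
    z_q     : quantized latents (selected codebook entries)
    beta    : commitment weight from the VQ-VAE paper
    """
    recon_loss = F.mse_loss(x_recon, x)
    # codebook loss: pull codebook vectors toward the encoder outputs
    codebook_loss = F.mse_loss(z_q, z_e.detach())
    # commitment loss: keep encoder outputs close to their chosen codes
    commitment_loss = F.mse_loss(z_e, z_q.detach())
    return recon_loss + codebook_loss + beta * commitment_loss
```

In practice the straight-through estimator (`z_e + (z_q - z_e).detach()`) is used so gradients flow through the quantization step.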
-
Hello,
I wanted to confirm the steps for training a VQ-VAE on radiology data. Thank you for working on such an interesting and important application of VQ-VAE. Our research group is particularly i…
-
I am very interested in your excellent work Foldseek. I want to re-train it on my own dataset based on your GitHub repository “foldseek-analysis”. I checked the code and found that it saves three fil…
-
@rosinality, thanks for sharing the code. It really helps me a lot.
I am confused about whether this is a vanilla VQ-VAE. I found no 'top'/'bottom' hierarchy in 'train_vqvae.py'. Would you please share how did …
-
Hi @clementchadebec.
Thank you for creating this repository.
I am attempting to train a VQ-VAE model, but I couldn't find an `embedding_dim` argument in either the `VQVAEconfig` or `VQVAE` classes…
-
In VQ-Font/model/VQ-VAE.ipynb.
for i in range(num_training_updates):  # xrange is Python 2 only; use range in Python 3
    data = next(iter(train_loader))
    train_data_variance = torch.var(data)
    # print(train_data_variance)
    # show(m…
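For context on what `train_data_variance` is typically for: in the common reference pattern (e.g. the DeepMind Sonnet VQ-VAE example) the reconstruction error is divided by the training-data variance so the loss scale is comparable across datasets. A minimal sketch with placeholder tensors:

```python
import torch
import torch.nn.functional as F

def normalized_recon_loss(recon, data, train_data_variance):
    # Normalize the MSE reconstruction term by the dataset variance,
    # as in the Sonnet VQ-VAE example; names here are illustrative.
    return F.mse_loss(recon, data) / train_data_variance
```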
-
After training the prior with PixelCNN, the probabilities of the codebook indices are calculated during inference. There's a distribution sampler used to pick/sample an index instead of choosing the highest probab…
lb-97 updated 6 months ago
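The distinction asked about above is stochastic (categorical) sampling versus greedy argmax decoding. A minimal sketch, assuming `logits` stands in for the PixelCNN output at one spatial position:

```python
import torch

def sample_index(logits, greedy=False):
    """Pick a codebook index from prior logits (illustrative helper).

    Stochastic sampling from the categorical distribution gives diverse
    generations; argmax collapses to the single most likely index.
    """
    probs = torch.softmax(logits, dim=-1)
    if greedy:
        return torch.argmax(probs, dim=-1)
    # torch.multinomial draws one index per row according to probs
    return torch.multinomial(probs, num_samples=1).squeeze(-1)
```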
-
https://fenghz.github.io/vector-quantization-based-generative-model/
-
Hello, I'm wondering which VQ-VAE model you are using. Is it VQ-VAE-1 or VQ-VAE-2?
Thanks in advance