-
I ran
`python /people/kimd999/script/python/cryoEM/vq-vae-2-pytorch/train_vqvae.py /people/kimd999/MARScryo/dn/data/part/PDX_label_correct/exp_coexp/input --size 256`
It seems to generate sample i…
kimdn updated
4 years ago
-
Hello! Thank you for the clean and user-friendly codebase!
I'm trying to finetune the VQ-VAE tokenizer and noticed some keys might be missing from the pretrained checkpoint listed on huggingface: `"o…
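One common workaround when a pretrained checkpoint is missing keys is to load it non-strictly, taking checkpoint values where the keys match and keeping the freshly initialized values elsewhere. A minimal sketch of that bookkeeping, using plain dicts to stand in for state dicts (the key names below are made up for illustration, not taken from this repository):

```python
def load_partial(model_state, checkpoint_state):
    """Merge a checkpoint into a model state dict.

    Returns the merged dict plus the keys the checkpoint lacked
    ("missing") and the keys the model does not expect ("unexpected").
    """
    missing = [k for k in model_state if k not in checkpoint_state]
    unexpected = [k for k in checkpoint_state if k not in model_state]
    merged = dict(model_state)  # start from the current (initialized) weights
    for k in model_state:
        if k in checkpoint_state:
            merged[k] = checkpoint_state[k]  # overwrite with pretrained value
    return merged, missing, unexpected

# Hypothetical key names for illustration only.
model = {"encoder.weight": 0.0, "quantize.embed": 0.0, "decoder.weight": 0.0}
ckpt = {"encoder.weight": 1.0, "decoder.weight": 2.0}
merged, missing, unexpected = load_partial(model, ckpt)
print(missing)  # -> ['quantize.embed']
```

In PyTorch itself, `model.load_state_dict(ckpt, strict=False)` performs the same check and returns the missing/unexpected key lists, which is a quick way to see exactly which parameters the pretrained checkpoint does not cover.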
-
Hello,
I wanted to confirm the steps for training a VQ-VAE on radiology data. Thank you for working on such an interesting and important application of VQ-VAE. Our research group is particularly i…
-
Hello, first of all, thanks for this interesting implementation of the VQ-VAE-2 paper.
I can train this network on a dataset of mine; however, the reconstructed images are a bit blurry. Quality is goo…
-
Hi, thank you for the great work.
I have a question about Eq. 1 of the supplementary material.
$\mathcal L_\text{VQ-VAE} = -\log p(X|\mathbf Z) + \|\text{sg}[\hat{\mathbf Z}] - \mathbf Z\|^2_2 + \|\hat{\mathbf Z} - \tex…
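For reference, Eq. 1 combines a reconstruction term with two quantization terms, where sg[·] (stop-gradient) treats its argument as a constant during backpropagation. Numerically the two quantization terms are identical; they differ only in which parameters receive gradients. A pure-Python sketch (the inputs and the commitment weight `beta` are illustrative assumptions, not values from the supplementary):

```python
def vq_loss(z_e, z_q, recon_nll, beta=0.25):
    """Evaluate the VQ-VAE objective numerically.

    z_e: encoder output (Z-hat); z_q: nearest codebook vector (Z);
    recon_nll: the -log p(X|Z) reconstruction term, precomputed here.
    sg[.] only affects backpropagation, so both quantization terms share
    the same value; in a framework, sg would be implemented via .detach().
    """
    sq = sum((a - b) ** 2 for a, b in zip(z_e, z_q))
    codebook_term = sq    # ||sg[Z-hat] - Z||_2^2: gradient moves the codebook
    commitment_term = sq  # ||Z-hat - sg[Z]||_2^2: gradient moves the encoder
    return recon_nll + codebook_term + beta * commitment_term

loss = vq_loss(z_e=[0.9, 0.1], z_q=[1.0, 0.0], recon_nll=0.5)
print(round(loss, 6))  # -> 0.525
```

The weighting `beta` on the commitment term is the typical formulation from the original VQ-VAE paper; whether Eq. 1 of this supplementary includes it is exactly the kind of detail worth confirming with the authors.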
-
## Abstract
- propose Vector Quantised Variational AutoEncoder (VQ-VAE)
- generative model that learns discrete representations
- prior is learnt rather than static
- solves the issue of "po…
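As a concrete illustration of the discrete representation: each encoder output is snapped to the index of its nearest codebook vector, and the learned prior is then fit over those indices. A minimal nearest-neighbor quantizer sketch (toy two-dimensional codebook; real models use high-dimensional embeddings and hundreds of entries):

```python
def quantize(vector, codebook):
    """Return the index of the nearest codebook entry (squared L2 distance)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: sqdist(vector, codebook[i]))

# Toy 3-entry codebook of 2-d vectors.
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(quantize((0.9, 0.1), codebook))  # -> 1, i.e. nearest to (1.0, 0.0)
```

The map from continuous vectors to indices is non-differentiable, which is why the training objective needs the stop-gradient terms (and the straight-through estimator) discussed elsewhere in these notes.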
-
Thanks for your impressive work! I have a few questions after reading your paper GestureDiffuCLIP.
1. The MotionCLIP model uses SMPL parameters as the motion representation, while BEAT and ZeroEGG…
-
## Link
https://arxiv.org/pdf/2002.03788.pdf
## What is it?
- Proposes a VQ-VAE-based TTS model
## How does it improve on prior work?
- Achieves high-quality speech synthesis by modeling how the latent variables change over time
## What is the key technique?
- Training consists of the following two stages:
1. A VQ-VAE encodes prosody as discrete latent…
-
Hi,
I recently read [this](https://ml.berkeley.edu/blog/posts/clip-art/) blog and was fascinated by the potential of these generative models. I am hoping to learn the fundamentals, reimplement models…
-
## Enhancement
Thanks for this wonderful work.
However, is there any guidance on training the VQ-VAE in MS-ILLM?