-
The link is invalid; it seems to have been deleted, and I don't have permission to access some of the links.
-
Hello, thank you for your open-source work, I have gained a lot from it.
When I try to train VQGAN from scratch to reconstruct plant300K data, I find that the reconstructed images are quite blurry, s…
-
Hi author, when I trained VQGAN in step 1 with my own dataset, the generated samples were as follows. They seem to have a lot of noise at the edges, and some generated samples cannot distinguish between…
-
### Problem Description
During the training of the VQGAN model on CT data, persistent "white dot" artifacts appear in the reconstruction results, even after extensive training. The white dots ar…
-
Hi,
Thank you for your great work and repo. I need to use pretrained VQGAN models for the FFHQ and CelebA-HQ datasets, separately. You already shared faceshq-vqgan, but I specifically need the Discrimi…
-
This is the path where I currently store the model weights, but I have been unable to load the VQGAN.
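For context, a common cause of checkpoint-loading failures is the layout of the saved file: Lightning-style checkpoints nest the weights under a `"state_dict"` key, while plain `torch.save` checkpoints are the state dict itself. A minimal sketch of handling both layouts, assuming a standard PyTorch checkpoint (the model here is a stand-in, not the repo's actual VQGAN class):

```python
import io
import torch

# Stand-in module; the real VQGAN class comes from the repository.
model = torch.nn.Linear(4, 4)

# Save a Lightning-style checkpoint (weights nested under "state_dict").
# An in-memory buffer stands in for a .ckpt file on disk.
buffer = io.BytesIO()
torch.save({"state_dict": model.state_dict()}, buffer)
buffer.seek(0)

# Load and unwrap: ckpt.get(...) handles both nested and plain layouts.
ckpt = torch.load(buffer, map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)
model.load_state_dict(state_dict)
```

If the keys still do not match, printing `state_dict.keys()` usually reveals a prefix (e.g. from a wrapper module) that needs to be stripped before `load_state_dict`.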
-
Hello, why do you call self.discriminator.zero_grad() and self.vqgan.zero_grad() during training? Won't this prevent gradient descent?
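To clarify the question above: `zero_grad()` does not prevent gradient descent. It only clears the gradients accumulated from the previous iteration before the next `backward()` call, which is standard in alternating GAN training. A minimal sketch with toy modules (the models and losses are illustrative, not the repo's actual ones):

```python
import torch

# Toy generator and discriminator standing in for vqgan / discriminator.
gen = torch.nn.Linear(4, 4)
disc = torch.nn.Linear(4, 1)
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

x = torch.randn(8, 4)

# Discriminator step: clear stale gradients, backprop, then update.
disc.zero_grad()                       # resets .grad to zero; does NOT block descent
d_loss = disc(gen(x).detach()).mean()  # detach so the generator gets no gradient here
d_loss.backward()
opt_d.step()

# Generator step: same pattern.
gen.zero_grad()
g_loss = -disc(gen(x)).mean()
g_loss.backward()
opt_g.step()
```

Without the `zero_grad()` calls, gradients from the discriminator step would leak into the generator update (and vice versa), since `backward()` accumulates into `.grad` rather than overwriting it.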
-
File "/export/scratch/ra63nev/lab/discretediffusion/OmniTokenizer/omnitokenizer.py", line 108, in __init__
spatial_depth=args.spatial_depth, temporal_depth=args.temporal_depth, causal_in_temporal…
-
Hi, I really appreciate the effort you guys have made toward reproducing MUSE. I was wondering which version of LAION was used for the trained VQGAN?
-