-
It would be great to see how this consistency model performs on face datasets such as CelebA-HQ and FFHQ, since the paper doesn't mention any face dataset.
-
Thanks a lot for your nice work!
How do I train celeba_hq.ckpt with my own dataset? I would appreciate it if you could provide more information on how to train the model or point me towards …
-
I would like to use torchvision.datasets.MNIST to run the code, since the CelebA dataset takes more time to train. Could you please tell me what changes to make if I want to train this code for the MNIST da…
-
-
Hello, I'm a beginner and very interested in your paper. I downloaded the code and ran it on the datasets shown in the paper. For the CelebA dataset, I removed the instances from the datasets code and used colorized masks. The trained model generates good faces from an input mask, but the output is covered by a layer of grayish-white noise. Did you run into the same issue during training?
-
Hey, I am looking to run your code on the CelebA dataset. I can get the code to work, but it takes a really long time to train (given the resources I have at my disposal). It would be great if you could s…
-
Currently all LAMA models have been trained on 256x256 crops of 512x512 images.
I would like to understand what changes should be made to train a LAMA model at a higher image resolution - maybe 512x5…
-
@Jireh-Jam Hey. Since your model is generalizable, is it possible to fine-tune your CelebA pre-trained model on a single image of a person who does not belong to the previous training set, i.e. the celeba d…
-
Hi,
Thank you for sharing your code on GitHub. I have a question regarding the configs.
In the configs for ImageNet and CelebA, there are two lines that seem to be more relevant for the lsun dat…
-
Hi,
I'm using the pre-trained pkl file: https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan2/versions/1/files/stylegan2-ffhq-256x256.pkl. I've attempted transfer learning (without augmen…