-
Thanks for your awesome work. I have a question about GAN inversion.
I used pSp to do GAN inversion in the anime domain (512x512, 300k images), with a pre-trained anime StyleGAN2 (512x512).
![image…
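To make the setup concrete: the pSp encoder predicts W+ codes for an input image in a single forward pass, and the fixed anime StyleGAN2 decodes them back to an image. Below is a minimal sketch of that flow; the `encoder`/`generator` interfaces are illustrative (a rosinality-style generator call), not the exact pixel2style2pixel API:

```python
import torch

@torch.no_grad()
def invert(encoder, generator, image):
    """Encoder-based inversion sketch (illustrative interfaces, not the exact pSp API).

    image:     (1, 3, 512, 512) tensor normalized to [-1, 1]
    encoder:   maps the image to per-layer W+ codes of shape (1, n_styles, 512)
    generator: rosinality-style StyleGAN2 generator
    """
    latents = encoder(image)                        # predict W+ codes in one forward pass
    recon, _ = generator([latents],
                         input_is_latent=True,      # codes are already in W/W+, skip the mapping net
                         randomize_noise=False)     # deterministic reconstruction
    return recon, latents
```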
-
I have trained StyleGAN2 with train.py on my own dataset, and training seems to work well; the intermediate images the model generates already look very realistic.
But when I use a modified projector.py to do…
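For context, the stock projector roughly optimizes a latent code against a perceptual plus pixel reconstruction loss; the bare-bones loop I am comparing against looks like the sketch below (assuming the rosinality-style stylegan2-pytorch `Generator` interface and the `lpips` package; the step count and loss weights are placeholders, not the repo's exact settings):

```python
import torch
import lpips  # perceptual loss, `pip install lpips`

def project(generator, target, steps=1000, lr=0.1, device="cuda"):
    """Optimize a single W latent so the generator reproduces `target` ((1, 3, 512, 512) in [-1, 1])."""
    percept = lpips.LPIPS(net="vgg").to(device)

    # Initialize from the mean W latent for stability.
    with torch.no_grad():
        w_mean = generator.style(torch.randn(10000, 512, device=device)).mean(0, keepdim=True)
    w = w_mean.clone().requires_grad_(True)

    optim = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        img, _ = generator([w], input_is_latent=True, randomize_noise=False)
        loss = percept(img, target).sum() + 0.1 * torch.nn.functional.mse_loss(img, target)
        optim.zero_grad()
        loss.backward()
        optim.step()
    return w.detach()
```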
-
### News
- Conferences
  - NAACL 2022: great work, everyone!
  - ICML 2022: 7.17 ~ 23, Baltimore, US
    - CLOVA schedule: https://naver-career.gitbook.io/en/teams/clova-cic/events/clova-and-ai-lab-icml-2022
  - COLIN…
-
### News
- Conferences
  - ICDM 2022 (11.28 - 12.1, Orlando, US) notification: congratulations to everyone.
  - EMNLP 2022: rebuttal extended by one week (until 9.4 AoE)
  - AI Rush Conference 2022 (from 1 PM on 9.7)
- [Launch of the Digital Platform Government Committee]…
-
Hi Yujun,
In the paper you state that a GAN inversion method must be used to map real images to latent codes, and that StyleGAN inversion methods work much better. Are there any documents introducing how to d…
-
I only have two 2080 GPUs, but I want to train a generator at a resolution of 1024x1024. How can I adjust the losses and parameters so that the model can be trained?
Since my generator is alr…
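One lever that seems relevant here is shrinking the per-GPU batch and accumulating gradients over several micro-batches so the effective batch size stays the same; a minimal sketch of the idea, where `compute_g_loss` and the other arguments are placeholders rather than the actual train.py code:

```python
def generator_step_with_accumulation(generator, g_optim, loader, compute_g_loss, accum_steps=4):
    """Gradient accumulation sketch: cut per-GPU memory while keeping the effective batch size.

    compute_g_loss(generator, batch) is a placeholder for the real generator loss.
    """
    g_optim.zero_grad()
    for i, batch in enumerate(loader):
        # Scale each micro-batch loss so the accumulated gradient matches one large batch.
        loss = compute_g_loss(generator, batch) / accum_steps
        loss.backward()                       # gradients add up across micro-batches
        if (i + 1) % accum_steps == 0:
            g_optim.step()
            g_optim.zero_grad()
```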
-
Hello! The GitHub repo provides some boundaries for the W space in StyleGAN, but I found that the code contains two configurations, W space and W+ space. So I wonder if all the boundaries lab…
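For context on the distinction I mean: in W space a single 512-dim code is shared by every generator layer, while in W+ each layer has its own code. A tiny sketch of how a single-code boundary edit relates to W+ (the layer count 16 corresponds to a 512x512 StyleGAN2, and `edit_strength` is just an illustrative value):

```python
import torch

n_layers = 16                              # a 512x512 StyleGAN2 has 16 style inputs (18 at 1024x1024)
w = torch.randn(1, 512)                    # W space: one code shared by all layers
boundary = torch.randn(1, 512)             # a semantic boundary's normal vector
boundary = boundary / boundary.norm()

# Editing in W: shift the single code, then broadcast it to every layer.
edit_strength = 3.0
w_plus = (w + edit_strength * boundary).unsqueeze(1).repeat(1, n_layers, 1)   # (1, 16, 512)

# Editing in W+: shift only selected layers, e.g. the coarse (early) ones.
w_plus_partial = w.unsqueeze(1).repeat(1, n_layers, 1)
w_plus_partial[:, :4] += edit_strength * boundary
```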
-
Thanks in advance.
-
Hi! Congratulations on your great work.
Could you provide a script for GAN inversion?
I wonder if the model could reconstruct a 3D face from only one input image.