eladrich / pixel2style2pixel

Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
https://eladrich.github.io/pixel2style2pixel/
MIT License

About getting more experimental results #160

Closed yyuu-6 closed 3 years ago

yyuu-6 commented 3 years ago

Dear Author: Thank you very much for your work. I've implemented StyleGAN inversion using your code and obtained 13 inverted images. Now I want to get more inverted images; could you please tell me whether the following process is correct?

1. Download the FFHQ dataset from https://github.com/NVlabs/ffhq-dataset and store it in /path/to/ffhq/images256x256.
2. Replace the source code "EXPERIMENT_DATA_ARGS = {"image_path": "notebooks/images/input_img.jpg"}" with the new path.
3. Use "Visualize Input" and "run_alignment(image_path)" to resize the images from 1024×1024 to 256×256.
4. Replace "image_paths" in the source code with the new paths. Here I have a question: I can't understand the difference between "image_path" and "image_paths". When I ran the StyleGAN inversion, "notebooks/images/input_img.jpg" did not appear in the results. (A sketch of looping over multiple paths follows this comment.)
5. Then I can run the StyleGAN inversion and get more inverted images.

Thank you in advance for your answers!
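A minimal sketch of step 4, assuming the notebook's single "image_path" is generalized to a list: it builds "image_paths" from a folder of your own images and reuses the notebook's run_alignment, img_transforms, net, and run_on_batch helpers, which are assumed to be defined in earlier cells. The folder path and extension filter below are placeholders, not part of the original notebook.

```python
import os

# Invert every image in a folder instead of the notebook's single image_path.
# Assumes run_alignment, img_transforms, net, and run_on_batch come from
# earlier notebook cells; input_dir is a placeholder to fill in yourself.
input_dir = "/path/to/my/images"
image_paths = [os.path.join(input_dir, f)
               for f in sorted(os.listdir(input_dir))
               if f.lower().endswith((".jpg", ".jpeg", ".png"))]

for image_path in image_paths:
    aligned = run_alignment(image_path)       # dlib-based face alignment, as in the notebook
    aligned = aligned.resize((256, 256))      # match the encoder's expected input size
    input_image = img_transforms(aligned)     # the notebook's torchvision transforms
    result = run_on_batch(input_image.unsqueeze(0), net)  # pSp inversion for one image
```

The loop is just the notebook's single-image pipeline applied repeatedly, so the original cells can stay unchanged.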

FrazierLei commented 3 years ago

If you just want to run inference on your own images, you don't need to download the FFHQ dataset. Just download the pretrained models, place them in the corresponding folders, and then replace those 13 downloaded images with your own (see the sketch below).
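To make this concrete, here is a hedged sketch of the configuration pattern used by the inference notebook: only the pretrained encoder and your own image are needed, no FFHQ download. The model and image file names are placeholders to replace with your actual paths.

```python
from torchvision import transforms

# Sketch of the notebook's configuration dict; the paths below are
# placeholders, and the layout mirrors the inference notebook's pattern.
EXPERIMENT_TYPE = "ffhq_encode"  # the StyleGAN inversion task

EXPERIMENT_DATA_ARGS = {
    "ffhq_encode": {
        "model_path": "pretrained_models/psp_ffhq_encode.pt",  # downloaded pretrained encoder
        "image_path": "notebooks/images/my_face.jpg",          # your own image, not FFHQ
        "transform": transforms.Compose([
            transforms.Resize((256, 256)),
            transforms.ToTensor(),
            transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]),
    },
}
```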