eladrich / pixel2style2pixel

Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
https://eladrich.github.io/pixel2style2pixel/
MIT License

Do you ever retrain the stylegan network? #287

Closed Zhentao-Liu closed 1 year ago

Zhentao-Liu commented 2 years ago

pSp is really great work on image-to-image translation. But I have a question: do you ever retrain the StyleGAN network for your applications (inversion, inpainting, super-resolution)? I noticed that the StyleGAN model file you provide in your project is different from the NVIDIA official implementation, so I wonder whether you used a fixed StyleGAN generator (the same as NVIDIA provided) or retrained it.

yuval-alaluf commented 2 years ago

The StyleGAN generator we use here is the official generator, converted from TensorFlow to PyTorch. We do not train our own generator.
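For reference, a minimal sketch of loading such a converted checkpoint, assuming the rosinality-style `Generator` bundled with this repo (`models/stylegan2/model.py`) and a converted FFHQ config-f checkpoint that stores the EMA generator weights under the `g_ema` key (the layout pSp's loading code expects); the path is a placeholder:

```python
import torch
from models.stylegan2.model import Generator  # rosinality-style generator bundled with pSp

# Placeholder path to a TF->PyTorch converted checkpoint.
ckpt_path = 'pretrained_models/stylegan2-ffhq-config-f.pt'

# FFHQ config-f: 1024x1024 output, 512-dim style vectors, 8-layer mapping network.
generator = Generator(1024, 512, 8, channel_multiplier=2)
ckpt = torch.load(ckpt_path, map_location='cpu')

# The converted checkpoint keeps the EMA generator weights under 'g_ema'.
generator.load_state_dict(ckpt['g_ema'], strict=False)
generator.eval()

# Sanity check: sample one face from a random latent.
with torch.no_grad():
    z = torch.randn(1, 512)
    img, _ = generator([z])
print(img.shape)  # expected: torch.Size([1, 3, 1024, 1024])
```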

Zhentao-Liu commented 2 years ago

Thanks for your reply! I do my research with stylegan2-ada-pytorch, which may be different from your StyleGAN implementation. So you mean that you just copied the weights of the original StyleGAN into your decoder without retraining it?


yuval-alaluf commented 2 years ago

Correct, we don't retrain it. We used rosinality's implementation of StyleGAN2 rather than StyleGAN2-ada-pytorch, but you can convert the models by following the instructions here.
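To illustrate why conversion is needed rather than a straight `load_state_dict`: the two implementations name (and in places shape) their parameters differently. A minimal sketch of inspecting an ada-pytorch snapshot, assuming the stylegan2-ada-pytorch repo is on the Python path (for its `legacy` and `torch_utils` modules) and a placeholder snapshot filename:

```python
import legacy  # from the stylegan2-ada-pytorch repo

# Placeholder path to an ada-pytorch training snapshot.
with open('network-snapshot.pkl', 'rb') as f:
    G_ema = legacy.load_network_pkl(f)['G_ema']  # a full nn.Module, not a bare state dict

# ada-pytorch uses names like 'mapping.fc0.weight' and 'synthesis.b4.conv1.weight',
# while rosinality's generator uses names like 'style.1.weight' and 'convs.0.conv.weight',
# so a key-remapping script (see the instructions linked above) is required before
# these weights can serve as the pSp decoder.
for name, _ in list(G_ema.named_parameters())[:5]:
    print(name)
```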