Hi, thanks for sharing your great work. I want to train stylegan2-ada to take an FFHQ image as input and generate the corresponding FFHQR image. The FFHQR dataset is a retouched version of FFHQ.
To my understanding, I can train a FFHQR model using the following command:
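(The paths and the dataset zip name below are just placeholders from my setup; I assume the FFHQR images first have to be packed into the zip format that train.py expects.)

```
# pack the FFHQR images into a dataset zip (paths are placeholders)
python dataset_tool.py --source=~/datasets/ffhqr --dest=~/datasets/ffhqr.zip

# train an unconditional generator on the FFHQR dataset
python train.py --outdir=~/training-runs --data=~/datasets/ffhqr.zip --gpus=8 --cfg=paper1024 --mirror=1
```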
However, this model will learn to generate FFHQR images from noise, and different images can be produced by passing different seeds to `generate.py`.
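For example, something like this (the network pickle path is a placeholder for my trained snapshot):

```
# sample unrelated FFHQR-style faces from random seeds -- not an img2img mapping
python generate.py --outdir=out --trunc=1 --seeds=85,265,297,849 --network=~/training-runs/ffhqr/network-snapshot.pkl
```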
So how can I use stylegan2-ada-pytorch for such an img2img translation problem? A few words from you would help me a lot. Thanks!!!