Thank you for your awesome work and great effort! But I'm still wondering why StyleGAN3 can't use something like the attribute directions from StyleGAN1?

The original FFHQ dataset contains 40 attributes for every image. By inverting those images back into latent codes, you would get all 40 precise attributes for every image, which seems more efficient and accurate than randomly generating 500,000 images and predicting their attributes.

Once a StyleGAN1 encoder has learned the latents, there is a method to extract attribute directions directly from the latent codes, which makes training a general multi-class classifier possible. Can this be achieved in StyleGAN3Editing?
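To illustrate what I mean, here is a minimal sketch of extracting one attribute direction from inverted latents. It simply takes the normalized difference of class means between positive and negative examples of a binary attribute; the array shapes, the `attribute_direction` helper, and the `smiling` label are all hypothetical placeholders, not anything from this repo:

```python
import numpy as np

def attribute_direction(latents, labels):
    """Estimate a linear edit direction in latent space as the normalized
    difference between the mean latent of positive and negative examples
    of one binary attribute (hypothetical helper, not repo code)."""
    latents = np.asarray(latents, dtype=np.float64)
    labels = np.asarray(labels).astype(bool)
    direction = latents[labels].mean(axis=0) - latents[~labels].mean(axis=0)
    return direction / np.linalg.norm(direction)  # unit-length direction

# Toy usage: 100 inverted latents in an assumed 512-dim W space,
# each with a random binary attribute label standing in for real annotations.
rng = np.random.default_rng(0)
w = rng.normal(size=(100, 512))
smiling = rng.integers(0, 2, size=100)
d = attribute_direction(w, smiling)
edited = w[0] + 3.0 * d  # move one latent along the attribute direction
```

With real inverted latents and the 40 attribute annotations, one such direction (or a linear classifier's normal vector) could be computed per attribute, which is the kind of workflow I'm asking about.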