yuval-alaluf / stylegan3-editing

Official Implementation of "Third Time's the Charm? Image and Video Editing with StyleGAN3" (AIM ECCVW 2022) https://arxiv.org/abs/2201.13433
https://yuval-alaluf.github.io/stylegan3-editing/
MIT License

About training my own boundary file #50

Closed shartoo closed 9 months ago

shartoo commented 11 months ago

Thank you for your awesome work and great effort! But I'm still wondering why StyleGAN3 can't use something like the attribute directions used with StyleGAN1?

The original FFHQ dataset contains 40 attribute labels for every image. By inverting those images back to latent codes, you would get all 40 precise attributes for every latent, which seems more efficient and precise than randomly generating 500,000 images and predicting their attributes.

The StyleGAN1 encoder provides a method to directly extract attribute directions from learned latent codes, which makes training a general multi-class classifier possible. Can this be achieved in stylegan3-editing?
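For context, the approach the question describes is a minimal sketch of InterFaceGAN-style boundary extraction: fit a linear classifier on attribute-labeled latent codes and take the normalized normal of its decision hyperplane as the editing direction. The latents and labels below are synthetic stand-ins for inverted FFHQ latents and their attribute annotations; `latent_dim` and the SVM hyperparameters are assumptions, not values from this repository.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
latent_dim = 512   # assumed W-space dimensionality
n_samples = 1000

# Hypothetical stand-in data: in practice, `latents` would come from
# inverting real images and `labels` from their attribute annotations
# (e.g. a binary "smiling" label per image).
true_direction = rng.normal(size=latent_dim)
latents = rng.normal(size=(n_samples, latent_dim))
labels = (latents @ true_direction > 0).astype(int)

# Fit a linear SVM; its weight vector is normal to the separating
# hyperplane between the two attribute classes.
clf = LinearSVC(C=1.0, max_iter=10000)
clf.fit(latents, labels)

# Normalize the boundary normal to get a unit editing direction.
direction = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# Editing then amounts to moving a latent along the direction
# by some scalar strength alpha.
alpha = 3.0
edited_latent = latents[0] + alpha * direction
print(direction.shape, edited_latent.shape)
```

Whether the resulting direction edits cleanly in StyleGAN3's latent space is exactly the open question here; the sketch only shows how a boundary file could be derived from labeled latents rather than from randomly sampled and classifier-annotated images.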