Closed · Parul-Gupta closed this 7 months ago
Thanks for your attention. There are two ways to obtain the inverted latent codes. The first is to train an encoder (e.g., pSp or e4e) on the seen images and then use it to encode the unseen images. However, we found that an encoder cannot handle significant domain gaps between seen and unseen categories, so we recommend the other line of inversion: direct optimization of the latent code. This line of research includes I2S (Image2StyleGAN), I2S++, and II2S. To run their inversion code, you may need to substitute the official StyleGAN implementation in their code with the StyleGAN2 implementation by rosinality. The optimization-based approach can lessen, to some extent, the influence of the category gap on unseen data.
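For readers unfamiliar with optimization-based inversion, here is a minimal sketch of the idea: freeze a pretrained generator and directly optimize a latent code by gradient descent so that the generated output matches a target image. The `TinyGenerator` below is a hypothetical stand-in for StyleGAN2, and plain MSE replaces the perceptual (LPIPS) and latent-regularization losses that I2S/II2S actually use; this is an illustration of the loop structure, not the papers' exact method.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Hypothetical stand-in for a pretrained StyleGAN2 generator."""
    def __init__(self, latent_dim=64, img_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, img_dim),
        )

def invert(generator, target, latent_dim=64, steps=200, lr=0.05):
    """Optimize a latent code so generator(latent) reconstructs target.

    The generator's weights are frozen; only the latent code receives
    gradients. Real I2S-style inversion adds LPIPS and latent priors.
    """
    for p in generator.parameters():
        p.requires_grad_(False)  # keep the generator fixed
    latent = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = generator.net(latent)
        loss = nn.functional.mse_loss(recon, target)  # simplified loss
        loss.backward()
        opt.step()
    return latent.detach(), loss.item()

torch.manual_seed(0)
G = TinyGenerator()
target = torch.randn(1, 256)          # stand-in for a target image
init_loss = nn.functional.mse_loss(
    G.net(torch.zeros(1, 64)), target).item()
w, final_loss = invert(G, target)
print(init_loss, final_loss)
```

The same loop applies unchanged to unseen-category images, which is why it sidesteps the encoder's domain-gap problem: no learned mapping from image to latent is needed, only a differentiable generator.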
Hi authors, kudos on this great work! I was wondering how one can obtain the pretrained latent codes and feature extractor weights for a new dataset (other than Flowers/VGGFace/AnimalFaces) using this codebase (as in the Getting Started section). Could you please point me in a direction to follow?
Thanks