chansey0529 / LSO

The official PyTorch implementation of our paper "Where is My Spot? Few-shot Image Generation via Latent Subspace Optimization", CVPR 2023.

Re: Pretrained inverted latent codes/weights for a new dataset #4

Closed · Parul-Gupta closed 5 months ago

Parul-Gupta commented 6 months ago

Hi authors, kudos for this great work! I was wondering how one can obtain the pretrained latent codes and feature extractor weights for a new dataset (other than Flowers/VGGFace/AnimalFaces) using this codebase, as referenced in the Getting Started section. Could you please point me to a direction to follow?

Thanks

chansey0529 commented 6 months ago

Thanks for your attention. There are two ways to obtain the inverted latent codes. The first is to train an encoder (e.g., pSp or e4e) on the seen images and then use it to encode the unseen images. However, we find that an encoder cannot handle significant domain gaps between seen and unseen categories, so we recommend the second line of inversion: direct optimization of the latent code. This line of research includes I2S, I2S++, and II2S. To enable their code for inversion, you might need to replace the StyleGAN2 implemented by rosinality with the official StyleGAN used in their code. Optimization-based methods can lessen the influence of the category gap on unseen data to some extent.
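
For reference, below is a minimal sketch of what direct latent optimization looks like in PyTorch, in the spirit of I2S: a W+ code is optimized so the generator's output matches the target image under a perceptual (LPIPS) plus pixel loss. The generator handle `G`, its calling convention `G(w)`, and the `mean_latent` initialization are illustrative assumptions, not APIs from this repo or the I2S codebases.

```python
import torch
import torch.nn.functional as F
import lpips  # perceptual loss; pip install lpips

def invert_image(G, target, mean_latent, n_latent=18, steps=1000, lr=0.01):
    """Optimize a W+ latent code so that G(w) reconstructs `target`.

    Assumes `target` is a (1, 3, H, W) image in [-1, 1], `mean_latent`
    is a (1, 512) average W code, and `G` maps a (1, n_latent, 512)
    W+ code to an image of the same size (hypothetical interface).
    """
    device = target.device
    percept = lpips.LPIPS(net='vgg').to(device)

    # Initialize every layer's code at the mean latent and optimize it directly.
    w = mean_latent.detach().clone().repeat(1, n_latent, 1).requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)

    for _ in range(steps):
        img = G(w)  # forward pass conditioned on the current W+ code
        loss = percept(img, target).mean() + F.mse_loss(img, target)
        opt.zero_grad()
        loss.backward()
        opt.step()

    return w.detach()
```

Because the latent code is optimized per image rather than predicted by an encoder trained only on seen categories, this kind of procedure is less sensitive to the seen/unseen domain gap, which is why the optimization-based line is recommended above.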