Closed: falloncandra closed this issue 3 years ago
To train the boundaries for InterFaceGAN, we actually generated random images from the latent space by sampling w vectors of size (1, 512), rather than using the inverted latents of size (1, 18, 512). For example, we randomly generated 500,000 images for learning the age boundary.
Once we had these latents and the attribute scores (e.g., age) of the corresponding images, we trained the boundary directions (each of size 512) using their official implementation.
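For anyone looking for a concrete starting point, here is a minimal sketch of that pipeline. Only `train_boundary()` comes from the InterFaceGAN repo; the `generator.mapping` / `generator.synthesis` handles and the `score_attribute()` classifier are placeholder names for whatever StyleGAN implementation and attribute predictor you use, and details of the calls vary by implementation.

```python
# Minimal sketch of the boundary-training pipeline described above.
# Assumptions: a StyleGAN generator exposing .mapping (z -> w) and
# .synthesis (w -> image); names and signatures vary by implementation.
# score_attribute() is a hypothetical classifier returning one attribute
# score (e.g., predicted age) per image.
import numpy as np
import torch
from utils.manipulator import train_boundary  # from the InterFaceGAN repo

num_samples = 500_000  # the thread mentions 500,000 images for the age boundary
w_codes, scores = [], []

with torch.no_grad():
    for _ in range(num_samples):
        z = torch.randn(1, 512)       # z ~ N(0, I), NOT uniform(-1, 1)
        w = generator.mapping(z)      # assumed to return a (1, 512) w vector
        img = generator.synthesis(w)  # generate the image from w
        w_codes.append(w.cpu().numpy())
        scores.append([score_attribute(img)])

latent_codes = np.concatenate(w_codes, axis=0)  # (N, 512)
scores = np.array(scores, dtype=np.float32)     # (N, 1)

# train_boundary() returns a unit-norm direction of shape (1, 512).
boundary = train_boundary(latent_codes=latent_codes, scores=scores)
```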
Thank you very much for your clear answer!
Can you help me? Is your sampling of the w vectors of size (1, 512) random? I used np.random to draw latents in the range (-1, 1), but many of the resulting images are not human faces. Do you generate the latents through the InterFaceGAN boundaries?
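In case it helps: StyleGAN's z latents are drawn from a standard normal distribution, not a uniform one, and the w vectors come from pushing z through the mapping network rather than being sampled directly. A quick illustration; `generator.mapping` is a placeholder name for whatever implementation you use:

```python
import numpy as np

# Off-manifold: uniform samples in (-1, 1) are not how StyleGAN latents
# are distributed, which is why many outputs are not faces.
z_bad = np.random.uniform(-1, 1, size=(1, 512))

# On-manifold: z ~ N(0, I) is the distribution StyleGAN was trained with.
z = np.random.randn(1, 512).astype(np.float32)

# w is then obtained via the mapping network, not sampled directly.
# `generator.mapping` is a placeholder; the call varies by implementation.
# w = generator.mapping(torch.from_numpy(z))
```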
Hi,
Can we just take [0, 0, :] of the (1, 18, 512) encodings produced by e4e and train the InterFaceGAN boundary on that? I already know the labels corresponding to the images.
Thanks for any help
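For reference, two common ways to collapse a (1, 18, 512) w+ code to (1, 512) are taking a single style layer (the [0, 0, :] idea above) or averaging over the 18 layers. Neither is guaranteed to match what the authors did (per the reply above, they sampled fresh w codes instead). A small sketch, with hypothetical file paths standing in for your own e4e inversions and labels:

```python
import numpy as np
from utils.manipulator import train_boundary  # from the InterFaceGAN repo

# Hypothetical inputs: e4e inversions of shape (N, 18, 512) and
# attribute labels of shape (N, 1); the paths are placeholders.
latents = np.load("e4e_latents.npy")
scores = np.load("scores.npy")

# Option 1: a single style layer, e.g. the first one ([0, 0, :] per image).
# Reasonable only if the 18 layers are nearly identical, which holds for
# true w codes but not necessarily for e4e's w+ inversions.
codes_layer0 = latents[:, 0, :]   # (N, 512)

# Option 2: average across the 18 style layers.
codes_mean = latents.mean(axis=1)  # (N, 512)

boundary = train_boundary(latent_codes=codes_mean, scores=scores)
```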
Hi, thanks for your work!
I would like to ask how you obtained the interfacegan_direction of size (1, 512) that is used in the Colab notebook.
When we invert an image to the latent space using your code, the resulting latent code has size (1, 18, 512). However, the train_boundary() method in the InterFaceGAN GitHub repository expects latent codes of size (1, 512). How did you preprocess the latent code from size (1, 18, 512) to (1, 512)?
Thank you very much for your help!