Perhaps I should ask how to obtain an initial inversion into the W latent space.
The initial inversion into the W latent space was done by training the WEncoder using the e4e codebase. We simply changed the encoder to use our WEncoder class and changed the number of latents from 18 to 1; I believe everything else was kept at the defaults.
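For concreteness, here is a rough sketch of the kind of change described, against the e4e codebase (github.com/omertov/encoder4editing). The `WEncoder` import and the option names (`encoder_type`, `stylegan_size`) are assumptions drawn from the HyperStyle and e4e repos; the actual diff used for training may differ.

```python
# Sketch only: swap e4e's W+ encoder for a single-latent W encoder.
import math
from models.encoders import psp_encoders               # e4e's built-in encoders
from models.encoders.w_encoder import WEncoder         # assumed: HyperStyle's WEncoder class


def set_encoder(self):
    """Drop-in replacement for pSp.set_encoder() in e4e's models/psp.py (illustrative)."""
    if self.opts.encoder_type == 'WEncoder':
        # Predict a single 512-dim W code (IR-SE50 backbone, as in e4e).
        return WEncoder(50, 'ir_se', self.opts)
    # Default e4e behavior: W+ encoder with one code per StyleGAN layer.
    return psp_encoders.Encoder4Editing(50, 'ir_se', self.opts)


def num_styles(opts):
    """Number of latents to predict: 1 for W, 18 for W+ at 1024x1024 (illustrative)."""
    if opts.encoder_type == 'WEncoder':
        return 1
    return int(math.log(opts.stylegan_size, 2)) * 2 - 2
```

At inference time the single W code is typically broadcast across all style inputs (e.g. repeated 18 times for a 1024x1024 generator) before being fed to StyleGAN.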
I trained an e4e model on my own dataset because I want to replace the [Faces W-Encoder] with it, but I find that the model does not match. I want to know how you obtained the [Faces W-Encoder]. Thanks!!