Closed · TheFloHub closed this issue 3 years ago
Hi @TheFloHub, I had the same experience: in my own code, the latent vector optimisation with fixed decoder weights does not easily converge to the optimal values. When I initialise the latent code close to the values of another code it works, but as you pointed out, that is no real solution. Did you figure something out (since you closed this issue)?
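To make concrete what I mean by latent vector optimisation with fixed decoder weights, this is roughly my setup as a minimal PyTorch sketch (the decoder, the sample tensors and all hyperparameters below are placeholders from my own code, not this repository's):

```python
import torch

# Placeholder decoder: any trained SDF network f(concat(latent, xyz)) -> sdf stands in here.
latent_size = 256
decoder = torch.nn.Sequential(
    torch.nn.Linear(latent_size + 3, 512), torch.nn.ReLU(), torch.nn.Linear(512, 1)
)
decoder.eval()
for p in decoder.parameters():
    p.requires_grad_(False)   # decoder weights stay fixed during inference

# Placeholder observations: sample points with their ground-truth SDF values.
xyz = torch.rand(2048, 3) * 2 - 1
sdf_gt = torch.rand(2048, 1) * 0.2 - 0.1

# Warm start: initialise close to a latent code that is already known to work.
known_code = torch.zeros(1, latent_size)  # placeholder for the code of a training shape
latent = (known_code + 0.01 * torch.randn_like(known_code)).requires_grad_(True)
optimizer = torch.optim.Adam([latent], lr=5e-3)

for step in range(800):
    optimizer.zero_grad()
    sdf_pred = decoder(torch.cat([latent.expand(xyz.shape[0], -1), xyz], dim=1))
    loss = torch.nn.functional.l1_loss(sdf_pred, sdf_gt)
    loss.backward()
    optimizer.step()
```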
Hello friends of deep learning,
I have a rather general question about the paper, not specifically about the code. Before inference you initialize the latent code with specific random values respecting the latent space (I believe the initial latent vector has a norm of about 1, right?). How do you make sure it falls into the right optimum, or why does it work so well? I'm asking because in my own implementation everything works except the inference. I have to initialize the latent vector very close to the already known, perfect latent vector for the Adam optimizer to find the right optimum. I have played around a lot with the parameters, but it doesn't seem to help. One solution is to sample the latent space and run inference for many latent vectors to find the right initialization, but that seems very brute force. For me it's a mystery, because there is no guarantee that the SDF network automatically creates a nice convex loss landscape in the latent space.
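To illustrate the brute-force workaround I mean, here is a rough sketch (all names, i.e. `decoder`, `xyz`, `sdf_gt`, the latent size and the step counts, are placeholders for my own implementation, not this repo's code):

```python
import torch

def optimize_latent(decoder, xyz, sdf_gt, init, steps=400, lr=5e-3):
    """Optimize a single latent code against the observed SDF samples; decoder stays fixed."""
    latent = init.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        sdf_pred = decoder(torch.cat([latent.expand(xyz.shape[0], -1), xyz], dim=1))
        loss = torch.nn.functional.l1_loss(sdf_pred, sdf_gt)
        loss.backward()
        optimizer.step()
    return latent.detach(), loss.item()

def reconstruct_with_restarts(decoder, xyz, sdf_gt, latent_size=256, n_restarts=32):
    """Brute force: try many random initializations and keep the code with the best final loss."""
    best_code, best_loss = None, float("inf")
    for _ in range(n_restarts):
        init = torch.randn(1, latent_size)
        init = init / init.norm()  # normalize the initial latent vector to length 1
        code, loss = optimize_latent(decoder, xyz, sdf_gt, init)
        if loss < best_loss:
            best_code, best_loss = code, loss
    return best_code, best_loss
```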
Greetings, Flo