Hello, thank you for sharing. I'm a complete novice, but I read your paper carefully. There are a few points I could not understand and could not find answered in the paper:

1. The paper says the model is trained "from unposed 2D images alone", yet the generator takes a camera pose as input. I'm confused: what are "unposed images"? What does "unposed" mean here?

2. The generator takes two latent codes (z_a, z_s). Where do they come from? Are they network parameters that are optimized during training?

3. The paper states that the "approach allows to modify shape and appearance of the generated objects", but after reading I still do not understand how the modification works. How do we control the latent codes z_a and z_s?

That's all. Looking forward to a reply from you or anyone else. Thanks!
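To make questions 2 and 3 concrete, here is how I currently imagine the latent codes work (this is only my guess, please correct me if it's wrong; the dimension 128 is made up):

```python
import random

random.seed(0)
DIM = 128  # assumed latent dimension; the papers may use a different size

# My guess: z_s and z_a are NOT trained parameters, but are sampled fresh
# from a standard normal prior for every generated image, as in most GANs.
z_s = [random.gauss(0.0, 1.0) for _ in range(DIM)]  # shape code
z_a = [random.gauss(0.0, 1.0) for _ in range(DIM)]  # appearance code

# "Controlling" shape or appearance would then mean fixing one code and
# varying the other, e.g. linearly interpolating two appearance codes:
z_a2 = [random.gauss(0.0, 1.0) for _ in range(DIM)]
t = 0.5
z_a_mix = [(1 - t) * a + t * b for a, b in zip(z_a, z_a2)]
```

So rendering with the same z_s but different z_a values would keep the shape fixed while changing appearance. Is that roughly the right picture?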
@WinterJack002 I have exactly the same questions after reading the GIRAFFE and GRAF papers. Did you find answers? I've been searching the internet for a few days without luck :(