Closed fefespn closed 1 year ago
Hi @fefespn,
We only render the images to train our 3D generative model. During inference, we do not need any input views to generate 3D models; we only need to randomly draw latent codes from a normal distribution.
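For context, inference-time sampling as described above typically looks like the sketch below. This is a minimal illustration, not this repo's actual API: `latent_dim`, `num_samples`, and the commented-out `model.generate` call are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 256   # hypothetical latent code size
num_samples = 4    # number of 3D models to generate

# Draw latent codes from a standard normal distribution;
# no input views are needed at inference time.
z = rng.standard_normal((num_samples, latent_dim))

# meshes = model.generate(z)  # hypothetical decoder call producing 3D models
print(z.shape)
```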
Thanks! My fault.
Can't we start from an image (obtain its latent code in some way) and get the 3D result?
No, this repo doesn't support that.
Ok, thanks a lot for your hard work and long days!!
In the paper you mention that the rendered training dataset contains 24 random views for Cars&Char and 100 for Motorbike. How many 2D images of the object do you need for inference? The same amount? Did you try inference with fewer, maybe just 2-3 views?