1) Using a trained NeRF model, we can extract the 3D mesh of an image using marching cubes (https://github.com/bmild/nerf/blob/master/extract_mesh.ipynb).
Can we do the same using the trained Giraffe model? If so, could you please provide some guidance on how the 3D mesh can be extracted for a given image after the model is trained?
2) Also, your paper indicates that we can control shape and appearance in the latent space without supervision because the feature space is disentangled. Does the code you have released support this? I have used the StyleGAN2 projector for controlled image generation, but I am very interested in knowing whether Giraffe can be used for editing the 3D shape. For example: after training a Giraffe model on chairs, can we input an image of a chair with arms and transform the latent code to reconstruct the 3D mesh/image of the same chair without arms?
In theory you can do this, but we did not investigate it. Please note that overfitting a single NeRF to multi-view images is an easier task than training these representations from unposed image collections, so presumably the quality of the inferred 3D geometry will be lower.
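The general recipe is the same as in the NeRF notebook linked above: evaluate the trained model's density on a dense 3D grid, then run marching cubes on that volume. A minimal sketch, assuming NumPy and scikit-image are available; the `query_density` function here is a hypothetical stand-in (a soft sphere) for evaluating the trained GIRAFFE decoder's density head at fixed shape/appearance codes:

```python
import numpy as np
from skimage import measure  # provides a marching-cubes implementation

# Hypothetical density query: in practice, evaluate the trained GIRAFFE
# decoder's sigma(x) at each grid point (shape/appearance codes fixed).
# Here a smooth sphere of radius 0.5 stands in for the network output.
def query_density(pts):
    r = np.linalg.norm(pts, axis=-1)
    return 1.0 / (1.0 + np.exp(10.0 * (r - 0.5)))

# Sample densities on a regular grid covering the scene bounding box.
N = 64
t = np.linspace(-1.0, 1.0, N)
grid = np.stack(np.meshgrid(t, t, t, indexing="ij"), axis=-1)  # (N, N, N, 3)
sigma = query_density(grid.reshape(-1, 3)).reshape(N, N, N)

# Extract the iso-surface at a chosen density threshold.
verts, faces, normals, values = measure.marching_cubes(sigma, level=0.5)

# Rescale vertices from voxel indices back to [-1, 1] world coordinates.
verts = verts / (N - 1) * 2.0 - 1.0
```

The resulting `verts`/`faces` arrays can be written out as a mesh (e.g. with `trimesh`). The threshold (`level`) usually needs tuning per model, since GAN-trained density fields are not calibrated the way an overfit NeRF's are.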
Our code supports generating samples where you only change the appearance code or only the shape code; please see the demo, video, etc. for this. I think what you mean is embedding your own image in the latent space of the model? If so, then no, we did not investigate this, but you can always do this for GANs: you need to optimize the latent code such that the reconstruction error w.r.t. your input image is minimized. We do not provide code for this.
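The latent optimization described above can be sketched in a few lines of PyTorch. This is not from the GIRAFFE repository; `ToyGenerator` is a hypothetical stand-in for the frozen trained generator, used only to show the optimization loop:

```python
import torch

torch.manual_seed(0)

# Hypothetical stand-in for the frozen trained generator. In practice you
# would load the real GIRAFFE generator and freeze its parameters.
class ToyGenerator(torch.nn.Module):
    def __init__(self, z_dim=16, img_dim=64):
        super().__init__()
        self.net = torch.nn.Linear(z_dim, img_dim)
        for p in self.parameters():
            p.requires_grad_(False)  # generator weights stay fixed

    def forward(self, z):
        return torch.tanh(self.net(z))

gen = ToyGenerator()
target = gen(torch.randn(1, 16))  # the "input image" we want to embed

# Optimize only the latent code to minimize reconstruction error.
z = torch.zeros(1, 16, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)

initial_loss = torch.nn.functional.mse_loss(gen(z), target).item()
for _ in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(gen(z), target)
    loss.backward()
    opt.step()
final_loss = loss.item()
```

Once a latent code is recovered this way, the disentangled shape and appearance components can be edited separately before re-rendering. In practice a perceptual loss (e.g. LPIPS) is usually added to the pixel-wise MSE for sharper embeddings.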