FlonneEx opened this issue 8 months ago
same question
+1
+1
+1
+1
+1
+1
From what I understand, the model generates multi-view images from different angles, but not the mesh itself (.obj, .ply files, etc.).
This is currently an "unsolved" problem. You can, however, theoretically take the output images and feed them through 3D reconstruction software such as Instant NGP, Meshroom, or other tools that generate a mesh from multiple images.
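If anyone wants to try that route, here is a minimal sketch of driving a standard photogrammetry pipeline from Python. It assumes COLMAP (another common photogrammetry tool, as an alternative to the ones named above) is installed and on the PATH, the folder paths are placeholders for wherever the multi-view renders were saved, and the exact output location may differ by COLMAP version. Also note the generated views may not be perfectly multi-view consistent, so classical SfM can fail to register them.

```python
import subprocess
from pathlib import Path

# Placeholder paths -- point these at wherever the multi-view renders were saved.
images_dir = Path("outputs/multiview_renders")
workspace = Path("outputs/colmap_workspace")
workspace.mkdir(parents=True, exist_ok=True)

# COLMAP's automatic reconstructor runs feature extraction, matching,
# sparse + dense reconstruction, and Poisson meshing in one call.
subprocess.run(
    [
        "colmap", "automatic_reconstructor",
        "--workspace_path", str(workspace),
        "--image_path", str(images_dir),
    ],
    check=True,
)

# The meshed result typically lands under <workspace>/dense/0/meshed-poisson.ply,
# which most viewers/converters can turn into an .obj.
```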
@ivantan-ys However, they do introduce a Coarse-to-Fine Training method to generate a 3D mesh (as opposed to the original NeRF-only approach). It is described in Section 4 of their paper but is completely missing from this repository.
+1, I would also like to see the code for their Coarse-to-Fine Training method for generating the 3D mesh.
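For what it's worth, even without the paper's coarse-to-fine code, you can pull a (geometry-only) mesh out of a trained density field with plain marching cubes. This is only an illustrative sketch, not the authors' method: `density_fn` is a hypothetical callable standing in for whatever function in this repo maps (N, 3) points to densities, and the bound/threshold values are guesses that would need tuning.

```python
import torch
import trimesh
from skimage import measure


def extract_mesh(density_fn, resolution=256, bound=1.0, threshold=10.0):
    """Run marching cubes over a sampled density grid and save an .obj."""
    # Sample the density field on a regular grid covering [-bound, bound]^3.
    # (A 256^3 grid is ~16.7M points; large grids may need chunked evaluation.)
    xs = torch.linspace(-bound, bound, resolution)
    grid = torch.stack(torch.meshgrid(xs, xs, xs, indexing="ij"), dim=-1)
    with torch.no_grad():
        sigma = density_fn(grid.reshape(-1, 3)).reshape(resolution, resolution, resolution)

    # Marching cubes extracts the iso-surface at the chosen density threshold.
    verts, faces, _, _ = measure.marching_cubes(sigma.cpu().numpy(), level=threshold)

    # Map voxel indices back into world coordinates and export.
    verts = verts / (resolution - 1) * 2.0 * bound - bound
    trimesh.Trimesh(vertices=verts, faces=faces).export("extracted_mesh.obj")
```

This only gives untextured geometry; getting textured objects like the website examples would still require baking colors from the model, which is what the missing coarse-to-fine stage presumably handles.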
Any updates?👀
The paper makes several references to meshes, rather than just sequences of images, and the project website has viewable examples that appear to be .obj files. However, I don't see any option to export an .obj, nor any example of doing so, and it isn't obvious from the scripts how to do it.
Am I missing something here? How can I obtain textured 3D objects like the examples?