yuancaimaiyi opened this issue 11 months ago
The output will be automatically saved under the `exp` folder, with a name like `exp/neuralangelo-colmap_sparse-wmask-*/@20231207-111559/save/it20000-mc512.obj`.
If you would additionally like to export a high-resolution mesh, you can run:

```bash
python export.py --exp_dir exp/neuralangelo-colmap_sparse-wmask-*/@20231207-111559 --res 1024
```
The results will be automatically saved under the `results` folder.
@hugoycj
Update: in addition, I still have two questions that I need your help with.
Question 1: Coordinate System Issue
(1) Regarding the coordinate system: as you can see in the images below, under the same pose, one is the COLMAP mesh and the other is the Instant-angelo mesh. In theory they should coincide, but there is a mismatch in both coordinates and scale. To clarify (I consider myself a beginner in NeRF, since I have long focused on traditional algorithms), how can I align the two results? See the registration sketch after the images.
COLMAP mesh:
Instant-angelo mesh:
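If the exact normalization transform is not available, one way to recover the similarity transform between the two meshes is a scaled ICP on points sampled from their surfaces. A minimal sketch with Open3D, using placeholder file names and assuming the two meshes already roughly overlap:

```python
import open3d as o3d

# Placeholder file names; load both meshes and sample point clouds from them.
colmap_pcd = o3d.io.read_triangle_mesh("colmap_mesh.ply").sample_points_uniformly(50000)
angelo_pcd = o3d.io.read_triangle_mesh("instant_angelo_mesh.obj").sample_points_uniformly(50000)

# Point-to-point ICP with scale estimation, since the two results differ by a
# similarity transform (rotation + translation + uniform scale).
result = o3d.pipelines.registration.registration_icp(
    angelo_pcd, colmap_pcd,
    max_correspondence_distance=0.5,  # tune to the scene's scale
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(
        with_scaling=True))

# 4x4 similarity transform mapping Instant-angelo coordinates into the COLMAP frame.
print(result.transformation)
```

Note that ICP only refines a rough alignment; with a large initial offset or scale gap, a global registration step (e.g. RANSAC over FPFH features) would be needed first.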
Question 2: Handling Unbounded or Forward-Motion Scenes (Outdoor)
(2) Can the pipeline handle unbounded scenes, or in other words, forward-motion scenes? As you can see in (1), that is an object-centric scene and the performance is decent. However, when I scanned an unbounded scene with my phone, the sparse model looked like the one below, and Instant-angelo failed to reconstruct the scene. For this kind of case, which parameters need to be adjusted? Or is Instant-angelo specifically designed for object-centric scenes?
Unbounded sparse model:
Instant-angelo mesh:
Apologies for the delayed response. Regarding the first question: we normalize the poses to a canonical space so that the entire reconstruction area lies within a coordinate range of (-1, 1). As a result, there should be a corresponding conversion from canonical space back to the original coordinates. I will add an option for this to the export scripts tomorrow.
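Until that option lands, here is a minimal sketch of the conversion, assuming the normalization was a similarity transform with a recorded center offset and uniform scale (`center` and `scale` are hypothetical names here; the actual values must be read from wherever the pipeline stored them during preprocessing):

```python
import numpy as np
import trimesh

# Hypothetical values: the center and uniform scale that were applied when
# normalizing the COLMAP poses into the (-1, 1) canonical cube.
center = np.array([0.0, 0.0, 0.0])
scale = 1.0

# Placeholder path; load the exported canonical-space mesh.
mesh = trimesh.load("path/to/it20000-mc512.obj", force="mesh")

# Invert the normalization v_canonical = (v_world - center) * scale,
# i.e. v_world = v_canonical / scale + center.
mesh.vertices = mesh.vertices / scale + center
mesh.export("it20000-mc512_world.obj")
```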
As for the second question, the current pipeline is specifically designed for outside-in capture (similar to Mip-NeRF 360) and has not been tested on inside-out scenarios (such as ScanNet), so we cannot guarantee successful reconstruction for indoor scenes. A pipeline tailored for indoor scene reconstruction, such as MonoSDF or NICER-SLAM, may yield better results.
@hugoycj Hi, awesome work, but there is no result output from my training here. What could be the problem? Thank you.