Closed: lbrianza closed this issue 1 year ago
The resolution is mismatched: the depth maps were created with resolution-level 1 (the default) but then fused with resolution-level 0. Don't set sub-scene-area too high or too low. The optimize and estimation geometric iterations can now be set from the command line with the options `geometric-iters 0` and `postprocess-dmaps 0`.
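To illustrate, the advice above could translate into a Densify.ini along these lines. This is a sketch, not the author's actual file; it assumes the config keys match DensifyPointCloud's long option names (OpenMVS reads its config via Boost.Program_options, which uses that convention):

```ini
# Hypothetical Densify.ini sketch: keep the resolution consistent
# between depth-map estimation and fusion, instead of estimating at
# level 1 and fusing at level 0.
resolution-level = 1
# Optionally skip the geometric-consistency and depth-map
# post-processing passes (also exposed as command-line options):
geometric-iters = 0
postprocess-dmaps = 0
```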
Thanks for spotting the issue! I was going crazy trying to understand what I was doing wrong :)
I can close this since the issue was solved. Thanks!
Hello,
I am trying to use the scalable pipeline to reconstruct a large scene, following the instructions here (which match the MvsScalablePipeline.py script). However, I must be doing something wrong: after splitting the scene (into 2 chunks in this specific case), the densification step produces only a sparse point cloud for each of the 2 sub-scenes, so I cannot proceed with the mesh reconstruction.
I start from an mvs/ folder containing the undistorted images and the scene.mvs file. This file is fine: it is produced by the main pipeline, MvgMvsPipeline.py, which reconstructs the final mesh as expected when processing all the images together without splitting the scene.
However, when I split the scene, this is what happens:
1) Generating depth maps
2) Splitting the scene
The scene seems to be split correctly into 2 sub-scenes, the 1st with 126k points and the 2nd with 158k. All good so far.
3) Creating a Densify.ini file and adding the following 2 lines:
4) Running DensifyPointCloud on both sub-scenes:
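For reference, this step can be sketched as a small driver that invokes DensifyPointCloud on each split while pinning the same resolution-level for every chunk. The paths and the helper names are hypothetical; only the `scene_XXXX.mvs` naming and the `--resolution-level` option come from the thread:

```python
import glob
import subprocess


def densify_cmd(scene_path, resolution_level=1):
    """Build the DensifyPointCloud invocation for one sub-scene,
    pinning --resolution-level so estimation and fusion agree."""
    return [
        "DensifyPointCloud",
        scene_path,
        "--resolution-level", str(resolution_level),
    ]


def densify_all(mvs_dir, run=subprocess.check_call):
    # The splitter names the chunks scene_0000.mvs, scene_0001.mvs, ...
    for scene in sorted(glob.glob(f"{mvs_dir}/scene_[0-9]*.mvs")):
        run(densify_cmd(scene))
```

The `run` parameter is injectable only to make the driver easy to test without the OpenMVS binaries installed; in practice the default `subprocess.check_call` executes each command.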
This is where the issue occurs. The two scene_dense_XXXX files produced from scene_XXXX make no sense: they contain only 25 and 230 points, respectively, out of the initial 200k+.
If I open scene_dense.ply (produced via the "normal" pipeline, without splitting the scene) in MeshLab, I see the correct point cloud with 200k+ points. However, the two scene_dense_XXXX point clouds produced with the scalable pipeline each contain only a few sparse points.
What could be going on? I get the same results when running the scalable pipeline on other scenes. I'm sure I'm doing something wrong, but where? Am I passing the wrong options to the commands?
Thanks in advance for the help!