cdcseacave / openMVS

open Multi-View Stereo reconstruction library
http://cdcseacave.github.io
GNU Affero General Public License v3.0
3.31k stars 907 forks

openmvg looks good but openmvs cannot work, known pose #990

Open zhaozhongch opened 1 year ago

zhaozhongch commented 1 year ago

Hi, I want to build an openMVG + openMVS pipeline with known poses. OpenMVG works on datasets with known camera intrinsics and extrinsics; it provides this function for some specific datasets, for example the ETH3D dataset. I downloaded the courtyard dataset and ran openMVG with the following commands:

...
print ("1. Intrinsics analysis")
pIntrisics = subprocess.Popen( [os.path.join(OPENMVG_SFM_BIN, "openMVG_main_SfMInit_ImageListingFromKnownPoses"),  "-i", input_dir, "-o", matches_dir, "-g", gt, "-t", "4"] )
pIntrisics.wait()

print ("2. Compute features")
pFeatures = subprocess.Popen( [os.path.join(OPENMVG_SFM_BIN, "openMVG_main_ComputeFeatures"),  "-i", matches_dir+"/sfm_data.json", "-o", matches_dir, "-m", "SIFT", "-f" , "1", "-p", "HIGH"] )
pFeatures.wait()

print ("3. Compute matches")
pMatches = subprocess.Popen( [os.path.join(OPENMVG_SFM_BIN, "openMVG_main_ComputeMatches"),  "-i", matches_dir+"/sfm_data.json", "-o", matches_dir+"/matches.putative.bin", "-f", "1", "-n", "ANNL2"] )
pMatches.wait()

print ("4. Filter matches" )
pFiltering = subprocess.Popen( [os.path.join(OPENMVG_SFM_BIN, "openMVG_main_GeometricFilter"), "-i", matches_dir+"/sfm_data.json", "-m", matches_dir+"/matches.putative.bin" , "-g" , "f" , "-o" , matches_dir+"/matches.f.bin" ] )
pFiltering.wait()

# Note: I fix the intrinsics and extrinsics below by passing
# "-f", "NONE", "-e", "NONE" to openMVG_main_SfM
reconstruction_dir = os.path.join(output_dir,"reconstruction_sequential")
print ("5. Do Incremental/Sequential reconstruction")
pRecons = subprocess.Popen( [os.path.join(OPENMVG_SFM_BIN, "openMVG_main_SfM"), "--sfm_engine", "INCREMENTAL", "--input_file", matches_dir+"/sfm_data.json", "--match_dir", matches_dir, "--output_dir", reconstruction_dir, "-f", "NONE", "-e", "NONE"] )
pRecons.wait()

print ("6. Colorize Structure")
pRecons = subprocess.Popen( [os.path.join(OPENMVG_SFM_BIN, "openMVG_main_ComputeSfM_DataColor"),  "-i", reconstruction_dir+"/sfm_data.bin", "-o", os.path.join(reconstruction_dir,"colorized.ply")] )
pRecons.wait()

print ("7. Structure from Known Poses (robust triangulation)")
pRecons = subprocess.Popen( [os.path.join(OPENMVG_SFM_BIN, "openMVG_main_ComputeStructureFromKnownPoses"),  "-i", reconstruction_dir+"/sfm_data.bin", "-m", matches_dir, "-o", os.path.join(reconstruction_dir,"robust.ply")] )
pRecons.wait()
...

Note that I fix the camera intrinsics and extrinsics at the openMVG_main_SfM step with the "-f", "NONE", "-e", "NONE" options. I then checked the colorized.ply generated by OpenMVG, and it looks pretty good considering it is only a sparse point cloud (point_cloud_from_openmvg_courtyard). The basic structure resembles the building in the input images (DSC_0313-min).

Then I use openMVG_main_openMVG2openMVS to convert the OpenMVG sfm_data.bin to an OpenMVS scene.mvs, and run DensifyPointCloud to densify the point cloud. However, nothing is output:

./DensifyPointCloud scene.mvs
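For reference, the conversion step could be driven in the same subprocess style as the rest of the pipeline. This is only a sketch: the paths (`/opt/openMVG/bin`, `undistorted`, the sfm_data.bin location) are placeholder assumptions, not values from my setup; the `-i`/`-o`/`-d` flags are the converter's input file, output file, and undistorted-image directory.

```python
import os

def build_openmvg2openmvs_cmd(bin_dir, sfm_data_bin, out_mvs, undistorted_dir):
    """Assemble the argv list for openMVG_main_openMVG2openMVS.

    -i: input sfm_data.bin, -o: output scene.mvs,
    -d: directory where the undistorted images are written.
    """
    return [
        os.path.join(bin_dir, "openMVG_main_openMVG2openMVS"),
        "-i", sfm_data_bin,
        "-o", out_mvs,
        "-d", undistorted_dir,
    ]

# placeholder paths for illustration only
cmd = build_openmvg2openmvs_cmd(
    "/opt/openMVG/bin",
    "reconstruction_sequential/sfm_data.bin",
    "scene.mvs",
    "undistorted")
# to actually run it: subprocess.Popen(cmd).wait()
```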

Only one depth map is generated for the 38 images, when there should be 38. The following is the log:

14:51:11 [App     ] Command line: DensifyPointCloud /home/zhaozhong/dataset/eth3d/courtyard/openmvs/scene.mvs
14:51:11 [App     ] Camera model loaded: platform 0; camera  0; f 0.549x0.549; poses 1
14:51:11 [App     ] Camera model loaded: platform 1; camera  0; f 0.549x0.549; poses 1
14:51:11 [App     ] Camera model loaded: platform 2; camera  0; f 0.549x0.549; poses 1
14:51:11 [App     ] Camera model loaded: platform 3; camera  0; f 0.549x0.549; poses 1
14:51:11 [App     ] Camera model loaded: platform 4; camera  0; f 0.549x0.549; poses 1
14:51:11 [App     ] Camera model loaded: platform 5; camera  0; f 0.549x0.549; poses 1
14:51:11 [App     ] Camera model loaded: platform 6; camera  0; f 0.549x0.549; poses 1
14:51:11 [App     ] Camera model loaded: platform 7; camera  0; f 0.549x0.549; poses 1
14:51:11 [App     ] Camera model loaded: platform 8; camera  0; f 0.549x0.549; poses 1
14:51:11 [App     ] Camera model loaded: platform 9; camera  0; f 0.549x0.549; poses 1
14:51:11 [App     ] Camera model loaded: platform 10; camera  0; f 0.549x0.549; poses 1
14:51:11 [App     ] Camera model loaded: platform 11; camera  0; f 0.549x0.549; poses 1
14:51:11 [App     ] Camera model loaded: platform 12; camera  0; f 0.549x0.549; poses 1
14:51:11 [App     ] Camera model loaded: platform 13; camera  0; f 0.549x0.549; poses 1
14:51:11 [App     ] Camera model loaded: platform 14; camera  0; f 0.549x0.549; poses 1
14:51:11 [App     ] Camera model loaded: platform 15; camera  0; f 0.550x0.550; poses 1
14:51:11 [App     ] Camera model loaded: platform 16; camera  0; f 0.549x0.549; poses 1
14:51:11 [App     ] Camera model loaded: platform 17; camera  0; f 0.549x0.549; poses 1
14:51:12 [App     ] Camera model loaded: platform 18; camera  0; f 0.549x0.549; poses 1
14:51:12 [App     ] Camera model loaded: platform 19; camera  0; f 0.549x0.549; poses 1
14:51:12 [App     ] Camera model loaded: platform 20; camera  0; f 0.549x0.549; poses 1
14:51:12 [App     ] Camera model loaded: platform 21; camera  0; f 0.549x0.549; poses 1
14:51:12 [App     ] Camera model loaded: platform 22; camera  0; f 0.549x0.549; poses 1
14:51:12 [App     ] Camera model loaded: platform 23; camera  0; f 0.549x0.549; poses 1
14:51:12 [App     ] Camera model loaded: platform 24; camera  0; f 0.549x0.549; poses 1
14:51:12 [App     ] Camera model loaded: platform 25; camera  0; f 0.549x0.549; poses 1
14:51:12 [App     ] Camera model loaded: platform 26; camera  0; f 0.549x0.549; poses 1
14:51:12 [App     ] Camera model loaded: platform 27; camera  0; f 0.550x0.550; poses 1
14:51:12 [App     ] Camera model loaded: platform 28; camera  0; f 0.550x0.550; poses 1
14:51:12 [App     ] Camera model loaded: platform 29; camera  0; f 0.549x0.549; poses 1
14:51:12 [App     ] Camera model loaded: platform 30; camera  0; f 0.549x0.549; poses 1
14:51:12 [App     ] Camera model loaded: platform 31; camera  0; f 0.549x0.549; poses 1
14:51:12 [App     ] Camera model loaded: platform 32; camera  0; f 0.549x0.549; poses 1
14:51:12 [App     ] Camera model loaded: platform 33; camera  0; f 0.549x0.549; poses 1
14:51:12 [App     ] Camera model loaded: platform 34; camera  0; f 0.549x0.549; poses 1
14:51:12 [App     ] Camera model loaded: platform 35; camera  0; f 0.549x0.549; poses 1
14:51:12 [App     ] Camera model loaded: platform 36; camera  0; f 0.549x0.549; poses 1
14:51:12 [App     ] Camera model loaded: platform 37; camera  0; f 0.549x0.549; poses 1
14:51:12 [App     ] Scene loaded from interface format (352ms):
    38 images (38 calibrated) with a total of 929.78 MPixels (24.47 MPixels/image)
    182793 points, 0 vertices, 0 faces
14:51:12 [App     ] Found a camera not pointing towards the scene center; the scene will be considered unbounded (no ROI)
14:51:12 [App     ] Point-cloud composed of 182793 points with:
 - visibility info (566797 views - 3.10 views/point):
            0 points with 1- views (0.00%)
       102477 points with 2  views (56.06%)
        37724 points with 3  views (20.64%)
        42592 points with 4+ views (23.30%)
    2 min / 3.10076 mean (1.89295 std) / 19 max
14:51:14 [App     ] Preparing images for dense reconstruction completed: 38 images (2s373ms)
14:51:14 [App     ] Selecting images for dense reconstruction completed: 38 images (72ms)
Estimated depth-maps 38 (100%, 1m14s602ms)       
Geometric-consistent estimated depth-maps 10 (26.32%, 1m25s, ETA 4m)...Segmentation fault (core dumped)

Although the log says 38 depth maps were estimated, there is only one. Is there anything I have done wrong? I also tried other datasets from ETH3D, but none of them worked.
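To verify how many depth maps were actually written, one can count the .dmap files in the working directory. A minimal sketch, assuming the depth maps land in the current working directory with the usual `.dmap` extension (the exact naming pattern may vary by OpenMVS version):

```python
import glob
import os
import tempfile

def count_depth_maps(work_dir):
    """Count OpenMVS depth-map files (*.dmap) under work_dir."""
    return len(glob.glob(os.path.join(work_dir, "*.dmap")))

# demo with fabricated files in a temporary directory
with tempfile.TemporaryDirectory() as d:
    for i in range(3):
        open(os.path.join(d, "depth%04d.dmap" % i), "w").close()
    print(count_depth_maps(d))  # → 3
```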

I tested using only one intrinsic (by changing the JSON file manually), and then both OpenMVG and OpenMVS work fine; I get a good-looking mesh. My current guess is that openMVG_main_openMVG2openMVS has a problem and cannot convert all the required camera intrinsics correctly to OpenMVS, but this is just a guess. Any help is welcome!
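One quick sanity check on the OpenMVG side is to verify that every view in sfm_data.json references an intrinsic that actually exists. This is a sketch assuming the standard cereal-serialized layout ("views"/"intrinsics" arrays, views carrying "id_intrinsic" inside "ptr_wrapper"/"data", intrinsics keyed by "key"); the `sample` data below is fabricated for illustration, and a real file would be loaded with `json.load(open("sfm_data.json"))`:

```python
def check_intrinsic_refs(sfm_data):
    """Return the id_intrinsic values referenced by views but missing
    from the intrinsics list (empty set means the file is consistent)."""
    known = {entry["key"] for entry in sfm_data["intrinsics"]}
    used = {
        view["value"]["ptr_wrapper"]["data"]["id_intrinsic"]
        for view in sfm_data["views"]
    }
    return used - known

# fabricated minimal example: the second view points at a missing intrinsic 7
sample = {
    "intrinsics": [{"key": 0}],
    "views": [
        {"value": {"ptr_wrapper": {"data": {"id_intrinsic": 0}}}},
        {"value": {"ptr_wrapper": {"data": {"id_intrinsic": 7}}}},
    ],
}
print(check_intrinsic_refs(sample))  # → {7}
```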

cdcseacave commented 1 year ago

pls share the reconstruction done with OpenMVG (original images and the found scene)

zhaozhongch commented 1 year ago

Hi, thanks for the help! I added the dataset and some OpenMVG outputs (input JSON file, output PLY file, sfm_data.bin), plus the .mvs file after conversion, to the Google Drive link here: https://drive.google.com/drive/folders/19eVrhGjHwZoEHwi3Ug1rQIk_91Ow1s7P?usp=sharing I simply downloaded the meadow dataset from ETH3D. Please let me know if anything more is needed to debug!

cdcseacave commented 1 year ago

it seems you set ID to 0 for all images, that will create a lot of problems, pls fix that first

zhaozhongch commented 1 year ago

Sorry, I am not sure what you mean by the image ID being "0". The image IDs in the data.zip folder are different, and in the JSON file they are also different. Do you mean the image IDs in the .mvs file?

cdcseacave commented 1 year ago

yes, in MVS file
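Once the image IDs are extracted from the .mvs scene (the MVS interface format is binary, so this assumes the IDs have already been read out by some tool or script), duplicates are easy to spot with a generic check like this:

```python
from collections import Counter

def find_duplicate_ids(ids):
    """Return the sorted list of IDs that occur more than once."""
    return sorted(i for i, n in Counter(ids).items() if n > 1)

# e.g. every image wrongly assigned ID 0, as suspected above
print(find_duplicate_ids([0, 0, 0, 1, 2]))  # → [0]
```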