cdcseacave / openMVS

open Multi-View Stereo reconstruction library
http://cdcseacave.github.io
GNU Affero General Public License v3.0
3.37k stars 911 forks

Question about results for Middlebury and ETH3D #354

Open kristinpro opened 6 years ago

kristinpro commented 6 years ago

Hello,

Sorry to post this as an issue, but I couldn't figure out the right way to ask a question here.

I was wondering if you can give me some insights on your results for Middlebury and ETH3D benchmarks?

It is important for me to reproduce OpenMVS's ranking relative to a few other MVS methods, because when I evaluate on my own data the ranking is different. Specifically, in my evaluation OpenMVS comes out on top, while on both Middlebury and ETH3D it generally ranks behind COLMAP.

So, I would like to understand why this happens. Is the data I work with 'easy' for OpenMVS but really challenging for COLMAP? And how does the choice of parameters affect the result on my data compared, for instance, to the Dino sequence from Middlebury?

So I thought that if I can reproduce the reconstructions of the benchmark sequences and visually compare them against what I see on the Middlebury (and ETH3D) leaderboards, then I can at least draw some conclusions about the parameters suited to that type of data.

I am particularly interested in the following :

  1. The Middlebury data comes with ground-truth camera poses but no sparse point cloud, while OpenMVS requires both as input, plus undistorted images. How did you prepare the Middlebury data to fit OpenMVS's input format?

  2. Did you use the default set of parameters to reconstruct the Dino and Temple sequences, or a different set for each? I'd appreciate it if you could share these parameter sets.

  3. In the ETH3D benchmark, the OpenMVS results are only available for the test datasets, for which the ground-truth model is neither visualised on the benchmark website nor downloadable as a .ply file (I didn't find it). Does this mean you did not use the ETH3D training datasets at all?

  4. Did you obtain the results on the ETH3D test sequences by simply running the OpenMVG + OpenMVS pipeline with default parameters, or with a different set? I'd appreciate it if you could share that parameter set.

Thank you

cdcseacave commented 6 years ago

Hi @kristinpro,

Not sure I understand the issue exactly: are you getting better, worse, or simply different results? And on which dataset?

  1. I used an internal SfM pipeline I developed to generate the sparse point cloud while keeping the cameras fixed; you can try the same with other pipelines, like OpenMVG (though I'm not sure how, or even whether it's possible)

  2. default params if I remember right (or maybe only 2 scales instead of the default 3), but definitely the same params for all datasets, that was a requirement of the test

  3. Indeed, I did not use the training datasets at all; I didn't have time to prepare for this contest in any way, I simply ran OpenMVS and submitted the results. I think considerably better results could be obtained on ETH3D if I adapted the fusion code a bit to exploit the way the score is computed; currently I get a very low accuracy score due to some outlier points in front of the true surface, and these can be filtered out relatively easily

  4. No, I converted the COLMAP SfM poses to OpenMVS format and used that.
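For anyone attempting the pose conversion mentioned in point 4: COLMAP's `images.txt` stores world-to-camera rotations as quaternions (in `qw, qx, qy, qz` order) with a translation vector, whereas a rotation matrix and camera center are what reconstruction code typically works with. A minimal sketch of the underlying math with toy values (this is not the actual converter code, just the standard conversion it has to perform):

```python
import numpy as np

def quat_to_rot(qw, qx, qy, qz):
    """Unit quaternion (COLMAP order: qw, qx, qy, qz) -> 3x3 rotation matrix."""
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qw*qz),     2*(qx*qz + qw*qy)],
        [2*(qx*qy + qw*qz),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qw*qx)],
        [2*(qx*qz - qw*qy),     2*(qy*qz + qw*qx),     1 - 2*(qx*qx + qy*qy)],
    ])

# COLMAP images.txt stores the world-to-camera pose: x_cam = R @ x_world + t.
# The camera center in world coordinates is therefore C = -R^T @ t.
qw, qx, qy, qz = 1.0, 0.0, 0.0, 0.0   # identity rotation for this toy example
t = np.array([1.0, 2.0, 3.0])

R = quat_to_rot(qw, qx, qy, qz)
C = -R.T @ t
print(C)  # -> [-1. -2. -3.]
```

OpenMVS ships an `InterfaceCOLMAP` tool that handles this conversion end to end; the snippet only illustrates the pose convention you need to get right when rolling your own.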

kristinpro commented 6 years ago

Oh, thanks for the quick reply, @cdcseacave =)

I am getting better results with OpenMVS on my data compared to COLMAP. This, however, makes me a little suspicious, because COLMAP's results on the benchmarks are better than OpenMVS's.

In my work I deal with the reconstruction of a human baby mannequin - a weakly textured object.

I have to choose which reconstruction method to build on in the project I work on, so I need to make sure that the best-performing method in my evaluation (currently OpenMVS) is well justified.

One more question: is the internal SfM pipeline you mentioned open source, or do you plan to release it?

Thanks

cdcseacave commented 6 years ago

Sorry, my SfM pipeline is not open source, nor do I plan to make it so.

pmoulon commented 6 years ago

OpenMVG can import Middlebury and ETH3D data. Then you can compute features and matches, triangulate the observations, and keep the camera parameters fixed. See main_SfMInit_ImageListingFromKnownPoses in https://github.com/openMVG/openMVG/tree/develop/src/software/SfM/import and then you can export the scene to OpenMVS.
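The "triangulate the observations while keeping the camera parameters fixed" step boils down to standard two-view triangulation against known projection matrices. A self-contained DLT sketch with made-up intrinsics and poses (this is the general technique, not OpenMVG's actual API):

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices (fixed, known poses); x1, x2: 2D pixel coords."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

# Toy setup: assumed intrinsics and two fixed camera poses; only the 3D
# structure is estimated, the cameras are never touched.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # 1-unit baseline

X_true = np.array([0.2, -0.1, 4.0, 1.0])          # homogeneous ground-truth point
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]         # project into view 1
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]         # project into view 2

X_est = triangulate_dlt(P1, P2, x1, x2)
print(np.allclose(X_est, X_true[:3], atol=1e-6))  # -> True
```

With noise-free observations the DLT recovers the point exactly; with real feature matches you would triangulate every track this way (or with a least-squares refinement) to build the sparse cloud OpenMVS expects.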

kristinpro commented 6 years ago

Thanks, @pmoulon, for pointing to 'main_SfMInit_ImageListingFromKnownPoses'. I didn't think of it in the first place. I will try this.

cdcseacave commented 5 years ago

A new version of the library was released, you can retry with it.