openMVG / openMVG

open Multiple View Geometry library. Basis for 3D computer vision and Structure from Motion.
Mozilla Public License 2.0

[Question] Parameters for benchmarking #1162

Closed Ntweat closed 6 years ago

Ntweat commented 6 years ago

Hi

I am trying to benchmark several sparse reconstruction (SfM) tools such as openMVG, COLMAP, and MVE, along with dense reconstruction tools such as openMVS, SMVS, and CMVS/PMVS.

I was wondering which parameters I can use for benchmarking both.

pmoulon commented 6 years ago

On the OpenMVG (SfM) side, you can only play with the pipeline and the matching (feature extraction presets).

Pipeline

Feature preset (ComputeMatches):
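To make the two choices concrete, here is a rough sketch of the incremental and global pipelines as command sequences. This is an illustration only: the binary names follow the openMVG tutorial, but the exact flags (`-p` for the describer preset, `-g` for the geometric filter) may differ between versions, so check them against your build.

```python
def pipeline_cmds(pipeline, preset="NORMAL"):
    """Return the openMVG command sequence for 'incremental' or 'global' SfM.

    A sketch of the tutorial pipeline; verify flags against your openMVG version.
    """
    cmds = [
        ["openMVG_main_SfMInit_ImageListing", "-i", "images/", "-o", "matches/"],
        # Feature preset: NORMAL, HIGH, or ULTRA (more features, slower)
        ["openMVG_main_ComputeFeatures", "-i", "matches/sfm_data.json",
         "-o", "matches/", "-p", preset],
        # Incremental typically filters with the fundamental matrix, global with the essential matrix
        ["openMVG_main_ComputeMatches", "-i", "matches/sfm_data.json",
         "-o", "matches/", "-g", "f" if pipeline == "incremental" else "e"],
    ]
    sfm = ("openMVG_main_IncrementalSfM" if pipeline == "incremental"
           else "openMVG_main_GlobalSfM")
    cmds.append([sfm, "-i", "matches/sfm_data.json", "-m", "matches/", "-o", "out/"])
    return cmds
```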

Ntweat commented 6 years ago

Sorry.

I meant to ask about parameters for benchmarking, such as the number of camera pose estimates, the completeness of the model, etc.

pmoulon commented 6 years ago

You can take a look at the papers that perform benchmarking tasks: https://github.com/openMVG/awesome_3DReconstruction_list#mvs---point-cloud---surface-accuracy
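The pose-count and completeness metrics mentioned above can be read directly from openMVG's `sfm_data.json` output. A minimal sketch, assuming the usual top-level `views` and `extrinsics` arrays (verify the keys against your version's output):

```python
import json

def pose_stats(sfm_data_path):
    """Count registered camera poses in an openMVG sfm_data.json.

    Assumes the openMVG JSON layout with top-level "views" and "extrinsics"
    arrays; adjust the keys if your version's output differs.
    """
    with open(sfm_data_path) as f:
        data = json.load(f)
    n_views = len(data.get("views", []))          # all input images
    n_poses = len(data.get("extrinsics", []))     # one entry per registered pose
    completeness = n_poses / n_views if n_views else 0.0
    return n_views, n_poses, completeness
```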

Ntweat commented 6 years ago

Thank you

pmoulon commented 6 years ago

SfM side:

MVS side:

Ntweat commented 6 years ago

During a preliminary analysis, I am finding that the incremental pipeline gives more camera pose estimates than the global one.

Why is that?

Also, COLMAP is giving more accurate pose estimates than openMVG.
I have sent a dataset that exhibits both of the above points.

pmoulon commented 6 years ago

I invite you to read here about the differences between incremental SfM and global SfM.

Which process did you use to compare the pose accuracy? How much more accurate is COLMAP for your given dataset? Did you check whether the number of features is similar? Are you sure you compared the SfM tools with the same settings (grouped vs. ungrouped intrinsics)?
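Since two SfM reconstructions live in different coordinate frames (and at different scales), one common way to compare pose accuracy is to align matched camera centers with a similarity transform (Umeyama's method) and report the residual RMSE. A sketch with NumPy; matching the centers by image name beforehand is assumed, and this is one possible protocol rather than the method used in this thread:

```python
import numpy as np

def aligned_center_rmse(centers_a, centers_b):
    """RMSE between camera centers after a similarity (Umeyama) alignment.

    centers_a, centers_b: (N, 3) arrays of corresponding camera centers from
    two reconstructions (already matched, e.g. by image name).
    """
    a = np.asarray(centers_a, float)
    b = np.asarray(centers_b, float)
    mu_a, mu_b = a.mean(0), b.mean(0)
    a0, b0 = a - mu_a, b - mu_b
    # Rotation from the SVD of the cross-covariance, then scale and translation
    U, S, Vt = np.linalg.svd(b0.T @ a0 / len(a))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:   # guard against a reflection
        D[2, 2] = -1
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / a0.var(0).sum()
    t = mu_b - s * R @ mu_a
    a_aligned = (s * (R @ a.T)).T + t
    return float(np.sqrt(((a_aligned - b) ** 2).sum(1).mean()))
```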

I see nothing after your comment:

I have sent a dataset, it has both the above points

Ntweat commented 6 years ago

I had sent a dataset to the email address you gave in https://github.com/openMVG/openMVG/issues/1149 (if you did not get it, I will send it again).

All 3 pipelines (openMVG incremental, openMVG global, and COLMAP) had the same settings: SIFT features, exhaustive matching (FLANN matcher), ungrouped intrinsics.

The accuracy can be seen in the sparse point clouds produced. The dataset I sent is of my car; with openMVG incremental there is a shift that produces a ghost windscreen during dense reconstruction.

pmoulon commented 6 years ago

Did you check whether the pipelines extract more or less the same number of features per image?

Please remember that cars are hard objects for photogrammetry, since they are not Lambertian (reflective objects are hard to handle).
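One way to check the per-image feature counts is to count lines in the `.feat` files that ComputeFeatures writes next to the images. This assumes openMVG's plain-text `.feat` layout (one `x y scale orientation` line per feature), which should be verified against your build:

```python
from pathlib import Path

def feature_counts(matches_dir):
    """Count features per image from openMVG's text .feat files.

    Assumes ComputeFeatures wrote one "<image>.feat" per image with one
    feature per line; verify the file layout against your openMVG version.
    """
    counts = {}
    for feat in Path(matches_dir).glob("*.feat"):
        with open(feat) as f:
            counts[feat.stem] = sum(1 for line in f if line.strip())
    return counts
```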

Ntweat commented 6 years ago

I used the same SIFT parameters for all pipelines. All of them gave more or less the same features.

openMVG global and incremental had identical features, since the pipelines only diverge at the next stage (compute_matches).

I know cars/vehicles are tough. I am very close to getting good results with the openMVG incremental pipeline. The issues occur only with a few objects like the car I sent; otherwise, openMVG incremental is outperforming COLMAP.