simonfuhrmann / mve

Multi-View Environment
http://www.gcc.tu-darmstadt.de/home/proj/mve/

I want to know how to change the SfM code if I don't want to normalize the feature positions #474

Closed dongmiller closed 5 years ago

dongmiller commented 5 years ago

Hi, in the SfM code there is an operation that normalizes the feature positions, but I don't want to normalize them. What should I change in the SfM part of the code? Thanks.

simonfuhrmann commented 5 years ago

Can you be more precise about what you want to achieve? By default, sfmrecon doesn't normalize the scene. You can pass --normalize if you want to normalize the scene. Otherwise, the scene will be in whatever coordinate system happens to result from the initial pair and the bundle adjustment optimizations.

dongmiller commented 5 years ago

Thank you for your reply. I just wonder why we need viewport->features.normalize_feature_positions (line 72 in bundler_features.cc). Can I delete it? I can't find where it would influence the matching process, but it doesn't work if I delete this code.

By the way, can I ask another question? I have calibrated my camera; the intrinsic matrix is (1413, 0, 979; 0, 1404, 546; 0, 0, 1), and the camera uses an OmniVision OV4689 sensor. I want to use my known intrinsic matrix when running the sfmrecon app. Should I set camera.flen = 1.413, camera.ppoint[0] = 0.979, camera.ppoint[1] = 0.546, and camera.paspect = 1? Looking forward to your reply.

simonfuhrmann commented 5 years ago

After feature detection, the feature coordinates are in image space, i.e., from 0 to the width/height of the image. For the purposes of feature matching and bundle adjustment, it's more convenient and numerically more stable to work with normalized feature positions, where coordinate (0, 0) is the principal point of the image and coordinates lie between -0.5 and 0.5. This is required as early as feature matching, because matching computes the fundamental matrix between the two images for outlier filtering, which requires normalized coordinates.
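This is not MVE's code, just a minimal sketch of what such a normalization conceptually does, assuming features are stored as pixel coordinates and the principal point sits at the image center (both assumptions; the actual normalize_feature_positions may differ in details):

```cpp
#include <algorithm>
#include <vector>

struct Feature2D { float x, y; };  // feature position in pixel coordinates

// Sketch only: map pixel coordinates to normalized coordinates with the
// image center at (0, 0) and values roughly in [-0.5, 0.5].
void normalize_feature_positions (std::vector<Feature2D>& feats,
    int width, int height)
{
    float const norm = static_cast<float>(std::max(width, height));
    float const cx = width / 2.0f;
    float const cy = height / 2.0f;
    for (Feature2D& f : feats)
    {
        f.x = (f.x - cx) / norm;
        f.y = (f.y - cy) / norm;
    }
}
```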

I am not familiar with that tool and its conventions, but MVE's focal length is normalized, i.e., the focal length in pixels divided by max(width, height) of the image. The same goes for the principal point, i.e., the principal point in pixels divided by the width and height for x and y, respectively.
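Not part of the original answer, but a small worked example of that conversion for the intrinsics from the question, assuming a hypothetical 1920x1080 image (the actual resolution is not stated in the thread); the paspect = fy / fx line is likewise only an assumption about the pixel-aspect convention:

```cpp
#include <algorithm>
#include <cstdio>

int main ()
{
    // Pixel-space intrinsics from the question: fx=1413, fy=1404, cx=979, cy=546.
    // The image resolution below is an assumption; use your actual image size.
    double const fx = 1413.0, fy = 1404.0, cx = 979.0, cy = 546.0;
    int const width = 1920, height = 1080;

    double const maxdim = static_cast<double>(std::max(width, height));
    double const flen = fx / maxdim;     // normalized focal length
    double const paspect = fy / fx;      // assumed pixel aspect convention
    double const ppx = cx / width;       // normalized principal point x
    double const ppy = cy / height;      // normalized principal point y

    std::printf("flen=%.4f paspect=%.4f ppoint=(%.4f, %.4f)\n",
        flen, paspect, ppx, ppy);
    return 0;
}
```

With these hypothetical numbers this prints roughly flen=0.7359, paspect=0.9936, ppoint=(0.5099, 0.5056), i.e., the normalized values are not simply the pixel intrinsics divided by 1000.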