simonfuhrmann / mve

Multi-View Environment
http://www.gcc.tu-darmstadt.de/home/proj/mve/

Dense reconstruction with Blender cameras #492

Closed: donlk closed this issue 4 years ago

donlk commented 4 years ago

Hi folks, I'm not sure whether this or the SMVS tracker is the right place for this issue, but I figured you might be able to help me here as well. I've set up a scene in Blender with an object at the origin and a camera rotating around it. I rendered out several frames, rotating the camera by 10 degrees each time, and exported the camera positions from Blender using this script. I then set up an MVE scene from the rendered frames and the exported cameras, and I'm trying to reconstruct it using SMVS (in this case without any SfM points, obviously). The problem is that MVE outputs either an empty point cloud every time, or one with weirdly placed points. I suspect the issue is in the camera conversion: I'm not sure what coordinate system MVE uses, or what rotation and translation matrices it expects. Maybe you can help me figure out what I'm doing wrong.

simonfuhrmann commented 4 years ago

Hi there, the camera conventions are documented in the Math Cookbook here: https://github.com/simonfuhrmann/mve/wiki/Math-Cookbook Also note that, while it is not uncommon to express the focal length and principal point in pixels, MVE uses a normalized convention. For example, for a photo taken with a 70mm lens on a 35mm sensor, you'd get a normalized focal length of 70mm / 35mm = 2. Similarly, the principal point is normalized, so a perfectly centered principal point would be (0.5, 0.5).
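Converting pixel-space intrinsics to this normalized form could be sketched as follows (a hedged sketch: the helper name is illustrative, and dividing the pixel focal length by max(width, height) for non-square images is an assumption, so verify against MVE's makescene source):

```python
def normalize_intrinsics(f_px, cx_px, cy_px, width, height):
    """Convert pixel-space intrinsics to MVE's normalized convention (sketch)."""
    # Normalized focal length, analogous to 70mm lens / 35mm sensor = 2.0.
    # Using max(width, height) as the divisor is an assumption.
    f_norm = f_px / max(width, height)
    # Principal point as a fraction of the image size; (0.5, 0.5) is centered.
    px = cx_px / width
    py = cy_px / height
    return f_norm, px, py
```

For instance, a 2000x1500 image with a 2000-pixel focal length and the principal point at the image center would give (1.0, 0.5, 0.5).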

I hope this helps.
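For the extrinsics, the Blender-to-MVE conversion the question asks about could be sketched like this, assuming the cookbook's standard computer-vision convention (world-to-camera rotation R and translation t, camera looking down +Z with +Y pointing down) and Blender's camera convention (camera looking down -Z with +Y up). The helper name is illustrative, and the axis conventions should be double-checked against the wiki:

```python
import numpy as np

def blender_to_mve(matrix_world):
    """Sketch: Blender camera-to-world matrix -> MVE world-to-camera R, t."""
    M = np.asarray(matrix_world, dtype=float)
    R_c2w = M[:3, :3]   # Blender camera-to-world rotation
    C = M[:3, 3]        # camera center in world coordinates
    # Flip from Blender's -Z forward / +Y up to CV's +Z forward / +Y down.
    flip = np.diag([1.0, -1.0, -1.0])
    R = flip @ R_c2w.T  # world-to-camera rotation in the assumed MVE convention
    t = -R @ C          # world-to-camera translation
    return R, t
```

In Blender, `matrix_world` would come from the camera object (e.g. `bpy.data.objects['Camera'].matrix_world`); the snippet takes it as a plain 4x4 array so it can run outside Blender.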

donlk commented 4 years ago

Yes, it helped a lot, thank you. Also, the reason I used SMVS is that it can produce a dense cloud without an SfM scene; the technique it uses is called semi-global matching. The problem is that a range needs to be given for a so-called initial depth sweep, and this range is said to be calculated from the SfM scene. Could you help me understand what that range is and how it can be computed or estimated without a prior SfM scene? I know this is an MVE thread, but the two projects seem to have things in common, so I figured I'd take my chances here as well.

flanggut commented 4 years ago

If you don't have an initial point cloud from SfM, you have to guess the depth range of the scene. This mostly depends on the camera poses and the content of the scene, but it does not need to be very accurate. If your scene has a "real-life" scale, i.e. 1 unit is 1 meter, I would simply choose something around 0.2 to 50, which should cover basically anything.
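For a setup like the one in this thread, with the object at the world origin, one rough way to bracket the sweep is from the camera-to-origin distances. This is a heuristic sketch, not SMVS's actual computation, and the margin factor is an arbitrary guess:

```python
import numpy as np

def guess_depth_range(camera_centers, margin=2.0):
    """Rough depth-sweep bounds for an object near the world origin (heuristic)."""
    # Distance from each camera center to the origin, where the object sits.
    d = np.linalg.norm(np.asarray(camera_centers, dtype=float), axis=1)
    # Widen by a safety margin so the true surface falls inside the range.
    return d.min() / margin, d.max() * margin
```

With cameras 5 and 10 units from the origin and the default margin, this yields a range of 2.5 to 20.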

simonfuhrmann commented 4 years ago

Another idea is to create SfM points without changing the cameras; check out apps/featurerecon. I'm closing this for now. Feel free to reopen.