alicevision / Meshroom

3D Reconstruction Software
http://alicevision.org

Meshroom can't estimate my second camera's images, but why does it work well alone? [question] #2132

Open · JohannSuarez opened 1 year ago

JohannSuarez commented 1 year ago

I have a DJI Mini 2 and a Canon EOS M50 Mark 1, both of which I used for taking photos of a subject. The intention of using two cameras is that the drone scans the whole body, while the Canon takes higher-res pictures of the subject's head. When I run the Canon M50's images alone, they are estimated reasonably well, with some images failing to be estimated.

Canon M50: [screenshot: m50]

I ran the drone's images alone as well, and all frames were estimated without failure, as you can see here.

Drone images: [screenshot: drone]

But when I run both image sets through the pipeline at the same time, not a single one of the drone images gets estimated. Here's a pic: [screenshot: only_the_m50]

Why can't I use both image sets simultaneously? They all have their focal lengths in their metadata: the drone images are 4mm and the M50 images are 30mm.
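
A quick way to double-check the focal-length metadata both cameras write is to read the EXIF tag directly. A minimal sketch, assuming the third-party exifread package is installed and with placeholder file paths:

```python
# Read the FocalLength EXIF tag that Meshroom's CameraInit uses to
# initialize intrinsics. Requires the third-party exifread package
# (pip install exifread); the paths below are placeholders.
import exifread

def focal_length(path: str):
    with open(path, "rb") as fh:
        tags = exifread.process_file(fh, details=False)
    return tags.get("EXIF FocalLength")  # expected: 4 for the drone, 30 for the M50

for p in ("drone/DJI_0001.JPG", "m50/IMG_0001.JPG"):  # placeholder paths
    print(p, "->", focal_length(p))
```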

msanta commented 1 year ago

The close-up photos have a shallow depth of field, so the background is not very sharp. If you then add the distant photos, there may not be enough matches in the background to tie the two sets together, and the person themselves does not cover enough of the image area for robust matching.

You could try taking some photos between the close-ups and the distant shots to act as a bridge between the two sets.

The other issue you will have is that the subject will be moving, even a tiny bit, between photos.

natowi commented 1 year ago

I would try something like this: Use the drone dataset reconstruction with the 15 matches from the close-up. Then do a second reconstruction pipeline with the close-up images. Use a common image that is matched in both datasets to align the two SfMs to each other (SfMAlignment node).
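
To make the alignment idea concrete: aligning two SfMs boils down to estimating one similarity transform (scale, rotation, translation) from the camera centers of the views the two reconstructions share. Below is a minimal sketch of that computation (the classic Umeyama/Procrustes solution); the function name is made up and this is not Meshroom's actual code:

```python
# Estimate the similarity transform (s, R, t) that maps reconstruction B
# onto reconstruction A, from the camera centers of views shared by both.
# Illustration of the idea only, not Meshroom's implementation.
import numpy as np

def similarity_from_common_cameras(centers_a: np.ndarray,
                                   centers_b: np.ndarray):
    """centers_a, centers_b: (N, 3) arrays; row i is the SAME view in both."""
    mu_a, mu_b = centers_a.mean(axis=0), centers_b.mean(axis=0)
    A, B = centers_a - mu_a, centers_b - mu_b      # centered point sets
    U, S, Vt = np.linalg.svd(A.T @ B)              # 3x3 cross-covariance
    d = np.sign(np.linalg.det(U @ Vt))             # guard against a reflection
    R = U @ np.diag([1.0, 1.0, d]) @ Vt            # best rotation B -> A
    s = (S * np.array([1.0, 1.0, d])).sum() / (B ** 2).sum()  # isotropic scale
    t = mu_a - s * (R @ mu_b)                      # translation
    return s, R, t                                 # x_a ≈ s * R @ x_b + t
```

Three shared, non-collinear cameras already determine the transform; more shared views make the estimate more robust, which is why a handful of bridge images between the two sets helps.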

If the close-up reconstruction is not so good, you could try masking out the background (best to generate the background masks with some other tool).
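
If you go the masking route, one possible sketch uses the third-party rembg package to produce binary background masks (white = subject, black = background). The folder names are placeholders, and how masks are named and fed back into the pipeline depends on your Meshroom version, so treat this as a starting point:

```python
# Generate binary background masks for the close-up images with the
# third-party rembg package (pip install rembg). Folder names are
# placeholders; check how your Meshroom version expects masks to be
# named and passed in.
from pathlib import Path

import numpy as np
from PIL import Image
from rembg import remove

src = Path("m50_images")      # placeholder: close-up photos
dst = Path("m50_masks")       # placeholder: mask output folder
dst.mkdir(exist_ok=True)

for img_path in sorted(src.glob("*.JPG")):
    rgba = remove(Image.open(img_path))           # RGBA cutout; alpha = matte
    alpha = np.asarray(rgba)[:, :, 3]
    mask = np.where(alpha > 127, 255, 0).astype(np.uint8)  # hard threshold
    Image.fromarray(mask).save(dst / (img_path.stem + ".png"))
```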

tak-ho commented 1 year ago

> I would try something like this: Use the drone dataset reconstruction with the 15 matches from the close-up. Then do a second reconstruction pipeline with the close-up images. Use a common image that is matched in both datasets to align the two SfMs to each other (SfMAlignment node).

The SfMAlignment node's Alignment Method options are:

  • from_cameras_viewid: Align cameras with same view Id
  • from_cameras_poseid: Align cameras with same pose Id
  • from_cameras_filepath: Align cameras with a filepath matching, using "fileMatchingPattern" (see the regex sketch below)

> If the close-up reconstruction is not so good, you could try masking out the background (best to generate the background masks with some other tool).
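
To illustrate the from_cameras_filepath option from the list above: the node matches cameras by filepath using a regular expression ("fileMatchingPattern"). How such a pattern could pair files is sketched below; both the pairing mechanism and the pattern itself are assumptions here, so check the parameter's tooltip in your version:

```python
# Sketch of the kind of regex a filepath-matching option expects: one
# capture group, and two cameras pair when the captured text is equal.
# The pattern is an assumption, not necessarily the node's default; it
# grabs the basename without its 3-letter extension.
import re

pattern = re.compile(r".*/(.*?)\.\w{3}")

path_a = "/data/drone_run/IMG_1234.JPG"   # placeholder paths
path_b = "/data/m50_run/IMG_1234.JPG"

key_a = pattern.match(path_a).group(1)    # "IMG_1234"
key_b = pattern.match(path_b).group(1)    # "IMG_1234"
print(key_a == key_b)                     # True -> these views would pair
```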

I just read a thread that may relate to my situation too: #2157. Is this Augment Reconstruction? Do I need to enable "Lock Scene Previously Reconstructed"?

Thanks!

natowi commented 1 year ago

No, this (SfMAlignment) is not the same as Augment Reconstruction. SfMAlignment aligns two individual reconstructions based on common information, such as the same camera filepath. This is good for aligning datasets that initially fail to reconstruct as one for various reasons. Both datasets are reconstructed individually and then merged together.

Augment Reconstruction expands the existing reconstruction and adds additional views to the scene. It is usually used in "live" reconstruction.

tak-ho commented 1 year ago

Thanks @natowi! Then it seems SfMAlignment is not suitable for my case...