alicevision / Meshroom

3D Reconstruction Software
http://alicevision.org

[bug] Wavy output #606

Closed Baasje85 closed 4 years ago

Baasje85 commented 5 years ago

Describe the bug
We are trying to reconstruct a building that has straight lines in its masonry. The (textured) results from Meshroom look as if the pattern waves. We had to increase the Downscale of the DepthMap node to 4, otherwise it would not complete. Is there something that could be done in the mesh filtering stage?

To Reproduce
Dataset is available on request.

Expected behavior
We would expect straight lines to remain straight.

Screenshots
[two screenshots of the textured reconstruction showing the wavy masonry pattern]


natowi commented 5 years ago

It looks like a problem with feature detection/matching of the repeated/symmetric pattern (similar to https://github.com/alicevision/meshroom/issues/605). The "waves" likely come from image pairs that do not actually match being forcibly fused together: the algorithms detected features, but due to the symmetry they were misaligned.

What algorithms/settings do you use that differ from the default pipeline?

You should avoid capturing images of the wall only. Here is something you could try: place a sign post with a feature-rich sign in front of the wall, keeping some distance to avoid casting shadows, and then capture images that include the sign post. This would improve feature detection and matching, and since nothing is placed on the wall itself, it introduces no artefacts there. The sign can later be removed from the mesh.

Baasje85 commented 5 years ago

> It looks like a problem with feature detection/matching of the repeated/symmetric pattern (similar to #605). The "waves" likely come from image pairs that do not actually match being forcibly fused together: the algorithms detected features, but due to the symmetry they were misaligned.

There are no images from only the walls, everything has additional context.

[thumbnail images of the capture set]

> What algorithms/settings do you use that differ from the default pipeline?

Force CPU Extraction: off
Guided Matching: on
Min Observation For Triangulation: 3
PrepareDenseScene Output File Type: jpg
Save Metadata: off
Save Matrices Text Files: on
DepthMap Downscale: 4 (to prevent errors)

Everything else is standard; I also experimented with "Keep Only The Largest Mesh" and "Smooth Iterations: 20".

natowi commented 5 years ago

> There are no images from only the walls, everything has additional context.

Ok, I thought the images were from the side wall of the church.

Do you use SIFT only or also AKAZE? (https://github.com/alicevision/meshroom/wiki/Reconstruction-parameters)

Baasje85 commented 5 years ago

> Do you use SIFT only or also AKAZE? (https://github.com/alicevision/meshroom/wiki/Reconstruction-parameters)

At the moment only SIFT. Would you suggest combining AKAZE with SIFT, or replacing SIFT with AKAZE?

natowi commented 5 years ago

You can try adding AKAZE to get more robust results (in the DescriberTypes parameter of FeatureExtraction, FeatureMatching and StructureFromMotion). I'd suggest trying this on a subset of your images first, to evaluate the results and save time. You can use the new Features Viewer to see the detected features: https://github.com/alicevision/meshroom/pull/539
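For a quick intuition of what each describer finds, here is a small sketch using OpenCV's SIFT and AKAZE detectors as stand-ins (AliceVision ships its own implementations, which differ in detail; the file name is a placeholder):

```python
# Compare how many keypoints SIFT and AKAZE each detect on one wall image.
# OpenCV here is only an analogy for AliceVision's describers.
import cv2

img = cv2.imread("wall.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name

sift = cv2.SIFT_create()
akaze = cv2.AKAZE_create()

kp_sift = sift.detect(img, None)
kp_akaze = akaze.detect(img, None)

print(f"SIFT keypoints:  {len(kp_sift)}")
print(f"AKAZE keypoints: {len(kp_akaze)}")
# Enabling both describer types combines the two feature pools, which is why
# the feature count roughly doubles later in this thread.
```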

Baasje85 commented 5 years ago

@natowi see #607

natowi commented 5 years ago

You need to enable Sift + Akaze in DescriberTypes for FeatureExtraction, FeatureMatching and StructureFromMotion. Sift should always stay active.

Enabling akaze in FeatureMatching only, for example, results in an error.

You can choose to use one or multiple describer types. If you use multiple types, they will be combined to help get results in challenging conditions. The values should always be the same in FeatureExtraction, FeatureMatching and StructureFromMotion. The only case where you will end up with different values is when testing and comparing results: you enable all the options you want to test in FeatureExtraction and then use a subset of them in Matching and SfM (see the sketch below).
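To make the consistency rule concrete, here is a minimal sketch (plain Python, not the Meshroom API) of the constraint described above: downstream nodes may only request describer types that FeatureExtraction actually produced, and in normal use all three lists are identical:

```python
# Consistency rule for DescriberTypes across the pipeline, as a toy checker.
# In normal use all three sets are identical; for A/B testing, extraction is
# a superset and the downstream nodes select a subset of it.

EXTRACTION = {"sift", "akaze"}   # everything that gets extracted
MATCHING = {"sift", "akaze"}     # normally identical to EXTRACTION
SFM = {"sift", "akaze"}          # normally identical to EXTRACTION

def validate(extraction, matching, sfm):
    for name, types in (("FeatureMatching", matching),
                        ("StructureFromMotion", sfm)):
        missing = types - extraction
        if missing:
            raise ValueError(
                f"{name} requests {sorted(missing)}, which FeatureExtraction "
                f"never computed: this is the misconfiguration that errors out")

validate(EXTRACTION, MATCHING, SFM)  # passes: sift+akaze everywhere
```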

Baasje85 commented 5 years ago

> You need to enable Sift + Akaze in DescriberTypes for FeatureExtraction, FeatureMatching and StructureFromMotion.

This has been done, but the failure from #607 still starts when AKAZE is additionally enabled. Could you check whether this works for you?

> Sift should always stay active.

I don't think this statement is true. Any of the describer types should work if used consistently in all nodes.

natowi commented 5 years ago

> This has been done, but the failure from #607 still starts when AKAZE is additionally enabled. Could you check whether this works for you?

For me this does work with the Monstree dataset. Can you share some of your images for testing?

> > Sift should always stay active.

> I don't think this statement is true. Any of the describer types should work if used consistently in all nodes.

Using akaze only in FeatureExtraction causes an ImageMatching error for the Monstree dataset.

https://github.com/alicevision/meshroom/issues/340#issuecomment-451654751

The vocabulary tree that is used has been learned on SIFT features, so currently ImageMatching is hardcoded to use SIFT descriptors only. So if you use AKAZE alone you cannot use ImageMatching. If your dataset is not too large, you can disable the usage of the vocabulary tree by setting Nb Matches to 0 (but then it will compute the matching between all image pairs; see the quick calculation below).

But you should be able to use the ImageMatching if SIFT and AKAZE are used in combination (both checked on the FeatureExtraction node).
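A quick back-of-the-envelope calculation shows why Nb Matches = 0 (exhaustive matching) only scales to small datasets; the 183-image figure comes from later in this thread:

```python
# Exhaustive matching grows quadratically: every image is matched against
# every other image, i.e. n*(n-1)/2 pairs.

def exhaustive_pairs(n: int) -> int:
    return n * (n - 1) // 2

for n in (50, 183, 500):
    print(f"{n:>4} images -> {exhaustive_pairs(n):>7} pairs to match")
# 183 images already mean 16653 pairwise matching runs, which is the cost the
# vocabulary tree avoids by pre-selecting likely neighbours.
```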

Baasje85 commented 5 years ago

> For me this does work with the Monstree dataset. Can you share some of your images for testing?

I just retried the image from the last line of the log file, and on an individual basis it goes through AKAZE without issues. The only difference: restarting Meshroom.

So I am now running SIFT+AKAZE.

> The vocabulary tree that is used has been learned on SIFT features, so currently ImageMatching is hardcoded to use SIFT descriptors only.

Is there any description of how the vocabulary tree works?

natowi commented 5 years ago

> Is there any description of how the vocabulary tree works?

You can read the original paper here: http://www.ipol.im/pub/art/2018/199/

> Every image is matched against its visual nearest neighbors using a vocabulary tree with spatial re-ranking. (*)
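In rough terms, the tree quantizes each descriptor into a "visual word" by descending a hierarchy of k-means centroids, and images that share many words are treated as likely neighbours. Below is a toy sketch of that idea (a hand-built two-level tree over 2D "descriptors"); it illustrates the concept, not the AliceVision implementation:

```python
import numpy as np

def assign_word(descriptor, tree):
    """Descend the centroid tree; the leaf reached is the visual word."""
    node = tree
    while "children" in node:
        dists = [np.linalg.norm(descriptor - child["centroid"])
                 for child in node["children"]]
        node = node["children"][int(np.argmin(dists))]
    return node["word_id"]

def bag_of_words(descriptors, tree, n_words):
    """Normalized histogram of visual words for one image."""
    hist = np.zeros(n_words)
    for d in descriptors:
        hist[assign_word(d, tree)] += 1
    return hist / max(1, len(descriptors))

# Tiny hand-built tree: 2 branches, 4 leaf words in total.
tree = {"children": [
    {"centroid": np.array([0.0, 0.5]), "children": [
        {"centroid": np.array([0.0, 0.0]), "word_id": 0},
        {"centroid": np.array([0.0, 1.0]), "word_id": 1}]},
    {"centroid": np.array([1.0, 0.5]), "children": [
        {"centroid": np.array([1.0, 0.0]), "word_id": 2},
        {"centroid": np.array([1.0, 1.0]), "word_id": 3}]},
]}

descriptors = np.array([[0.1, 0.9], [0.9, 0.1], [0.1, 0.05]])
print(bag_of_words(descriptors, tree, n_words=4))
# Images are then ranked by how similar their word histograms are, which is
# far cheaper than matching raw descriptors against every other image.
```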

Baasje85 commented 5 years ago

With SIFT+AKAZE the same number of photos is reconstructed: 180 out of 183. The number of features doubled, from 49169 to 109498. I will update the ticket when it is textured.

Baasje85 commented 5 years ago

[screenshot of the new textured result]

Not really an improvement.

curtthemartian commented 5 years ago

Can you show us your mesh? You may get better results if you add 40 or so iterations to the MeshFiltering "Smoothing Iterations" setting. I tried changing the lambda: it got much worse above 1.0, and I have yet to try smaller values. From my time doing computational fluid dynamics in college, I wonder how this lambda actually enters the math of the calculation.
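For intuition, this kind of mesh smoothing appears to be in the family of Laplacian smoothing, where lambda is the fraction of the way each vertex moves toward the average of its neighbours per iteration. The sketch below shows that behaviour on a toy zig-zag polyline; it illustrates the general technique under that assumption, not the exact AliceVision code:

```python
# Classic Laplacian smoothing: each iteration moves every free vertex a
# fraction `lam` of the way toward the centroid of its neighbours.
# lam = 1 jumps straight to the centroid; lam > 1 overshoots and oscillates,
# which matches the degradation reported above.
import numpy as np

def laplacian_smooth(vertices, neighbours, lam=0.5, iterations=20):
    """vertices: (n, 3) array; neighbours: list of neighbour-index lists."""
    v = vertices.astype(float).copy()
    for _ in range(iterations):
        centroids = np.array([v[nb].mean(axis=0) for nb in neighbours])
        # pin the two endpoints so the toy line has boundary conditions
        v[1:-1] += lam * (centroids[1:-1] - v[1:-1])
    return v

# A zig-zag polyline relaxes toward a straight line, which is the behaviour
# the original poster wants for the masonry edges.
verts = np.array([[0, 0, 0], [1, 1, 0], [2, 0, 0], [3, 1, 0], [4, 0, 0]])
nbrs = [[1], [0, 2], [1, 3], [2, 4], [3]]
print(laplacian_smooth(verts, nbrs, lam=0.5, iterations=20).round(3))
```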

curtthemartian commented 5 years ago

Oh, and thanks everyone for suggesting the AKAZE toggle. It really helped get better results in my dining-room test. I will be running a new reconstruction of my house with it enabled in the next few days.

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale[bot] commented 4 years ago

This issue is closed due to inactivity. Feel free to re-open if new information is available.