cavalli1234 / AdaLAM

AdaLAM is a fully handcrafted realtime outlier filter integrating several best practices into a single efficient and effective framework. It detects inliers by searching for significant local affine patterns in image correspondences.
BSD 3-Clause "New" or "Revised" License

Questions for the parameters modification? #15

Closed lin-name closed 1 year ago

lin-name commented 2 years ago

Hello! Thank you very much for releasing the implementation of this work. The runtime & performance are quite impressive, and it works well for my image feature matching.

But how can I modify the parameters listed in the 'DEFAULT_CONFIG' of 'class AdalamFilter'? I have tried changing them, but the output does not change.


lin-name commented 2 years ago

The default parameter list: AdaLAM1

The parameter list after modified: AdaLAM2

cavalli1234 commented 2 years ago

Hello, thank you for your interest in AdaLAM, I'm happy to know it works well for you!

The expected way to customize parameters is by passing a dictionary at initialization, like:

matcher = AdalamFilter({'area_ratio': 300, 'min_confidence': 300, 'orientation_difference_threshold': 90})

This would let you create a filter instance with custom parameters, without changing the default behavior!
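The mechanism behind this is a simple config overlay: the keys you pass replace the corresponding defaults, and every key you omit keeps its default value. A minimal standalone sketch of that pattern (illustrative only, not the actual AdaLAM source; the default values below are placeholders):

```python
# Sketch of the config-overlay pattern at initialization: only the keys
# you pass are replaced, all other defaults are kept. Illustrative only;
# the default values below are placeholders, not AdaLAM's real defaults.

DEFAULT_CONFIG = {
    'area_ratio': 100,
    'search_expansion': 4,
    'min_confidence': 200,
    'orientation_difference_threshold': 30,
}

def make_config(custom=None):
    """Overlay user-supplied keys on a copy of the defaults."""
    config = dict(DEFAULT_CONFIG)  # copy, so the defaults stay untouched
    if custom:
        config.update(custom)
    return config

cfg = make_config({'min_confidence': 300, 'orientation_difference_threshold': 90})
```

Because the overlay works on a copy, creating one filter with custom parameters never affects other instances built from the unchanged defaults.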

That said, editing the defaults directly should also take effect in principle. One way to get the behavior you observed would be if you cloned this repository, pip-installed it into a conda environment, and then edited the file without refreshing the installation.

Whatever the reason, it should work if called as above!

lin-name commented 2 years ago

Thanks for your detailed reply! Yesterday I got the parameter modification working by passing a dictionary at initialization. My goal in changing the default parameters is to obtain many more matched feature points between the source image and the destination image, as in the image below: AdaLAM_test_image

As you can see, the image shows a doll model and a turntable; the left part is the source image and the right is the destination image. Compared with the building in your example below, which is rich in corner points, the doll's head and body in my image are relatively smooth. How can I get many more matched feature points? AdaLAM_example_image AdaLAM_example_image_1

I would sincerely appreciate your advice on which parameters to adjust. Any other feasible and effective methods are also welcome. Looking forward to your reply.

cavalli1234 commented 2 years ago

This doll looks rather challenging for reconstruction!

I see two major challenges here that you could address:

  1. The doll has very poor texture, and even seems a bit shiny in some parts (which creates a misleading apparent texture!). Here the problem arises even before the matcher, at the level of keypoint detection and description. If you have control over the scene, I would make sure to have very diffuse, soft lighting that does not move relative to the doll, so that it casts fixed self-shadows and brings out the roughness of the material. In any case, you can experiment with several keypoint detectors and their settings to get as many detections as possible on the doll, because poor texture will generally yield few keypoints from the start.
  2. The doll is extremely non-planar in shape. This is generally a problem for descriptors, since the appearance of the surface changes nonlinearly with viewpoint, and most descriptors are designed or learned with at most homography warps in mind (i.e. what you see when changing viewpoint while looking at a planar texture). This means your descriptors will likely not be robust to viewpoint change on the doll, so you want very dense footage, rotating around it slowly. Luckily, the turntable is planar, so the opposite holds there, and relative camera poses can be estimated at least from those matches, which will help a lot.

     For AdaLAM specifically, non-planarity also affects the matching stage, since we check matches for local affine consistency. This may work very well on the turntable, but the doll itself will deviate much more from the affine model. In this regard, you can increase the tolerance to deviations from the affine model by lowering the 'min_confidence' parameter. Also, orientation alone may be less meaningful under such distortions, so you might increase the 'orientation_difference_threshold' a bit. Then 'area_ratio' and 'search_expansion' could in principle be tuned to different keypoint densities; however, in my own experiments the same parameters were optimal for both 8k and 2k keypoints per image, so only try tuning them if you have far fewer than 2k keypoints. The opposite holds again if you are interested in matching only on the turntable for the sake of camera poses! If loose settings give you too many outliers on the turntable, you can first obtain and filter out the turntable matches with strict settings, then process the rest with loose settings.

I hope this helps!