Closed · jac08h closed this 1 year ago
Thanks for the report. The problem is that the OpenCV drawing code stopped working at some point.
inliers_mask.astype(int).ravel().tolist()
is wrong: when you convert the boolean mask (whether each correspondence is an inlier or not) to int, the values end up indexing the points instead of filtering them.
inliers_mask.astype(bool).ravel().tolist()
is correct in principle, but you have to play with OpenCV to make it work.
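To illustrate the difference (a rough NumPy sketch with made-up points, not the notebook code):

import numpy as np

inliers_mask = np.array([[1], [0], [1]], dtype=np.uint8)  # 0/1 mask as returned by cv2.findHomography
pts = np.array([[10., 20.], [30., 40.], [50., 60.]])

pts[inliers_mask.astype(int).ravel()]   # fancy indexing: picks rows 1, 0, 1 -> not filtering
pts[inliers_mask.astype(bool).ravel()]  # boolean mask: keeps only rows 0 and 2 -> the inliers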
Or - better - switch to kornia_moons visualization, as in this tutorial:
https://kornia-tutorials.readthedocs.io/en/latest/image_matching_adalam.html
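Something like this (adapting the tutorial; I assume you already have the LAFs lafs1/lafs2, the tentative match indices idxs, and the original numpy images img1/img2 - the import path is kornia_moons.feature in older releases):

import cv2
import kornia.feature as KF
from kornia_moons.viz import draw_LAF_matches

# keypoint centers of the tentatively matched LAFs
mkpts1 = KF.get_laf_center(lafs1).squeeze(0)[idxs[:, 0]].detach().cpu().numpy()
mkpts2 = KF.get_laf_center(lafs2).squeeze(0)[idxs[:, 1]].detach().cpu().numpy()
H, inliers_mask = cv2.findHomography(mkpts1, mkpts2, cv2.RANSAC, 1.0)

draw_LAF_matches(lafs1.cpu(), lafs2.cpu(), idxs.cpu(),
                 img1, img2,
                 inliers_mask > 0,  # boolean per-match inlier flags
                 draw_dict={'inlier_color': (0.2, 1, 0.2),
                            'tentative_color': None,
                            'feature_color': (0.2, 0.5, 1),
                            'vertical': False})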
Thank you for the response.
The example with KeyNet-AdaLAM works for me - I get the same visualization and also the same number of tentative matches and inliers.
However, using the kornia_moons visualization does not fix the problem with the MKD and TFeat descriptors - it displays the same matches. Therefore, I believe the problem is in the matches/descriptors themselves; for example, fewer inliers are found in my run.
@jac08h could you please share an updated colab?
Here is the version using kornia_moons visualization. To use it, I modified get_local_descriptors
to return LAFs as well.
@jac08h I don't know why, but you are using a very old version of kornia-examples.
The key difference is that in the current version we have:
timg = K.color.rgb_to_grayscale(K.image_to_tensor(img, False).float()/255.)
Whereas in yours:
timg = K.color.rgb_to_grayscale(K.image_to_tensor(img, False)/255.)
We dropped the automatic conversion to float in K.image_to_tensor in kornia a long time ago, so when the uint8 image is divided by 255, it essentially becomes a binary image.
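In other words, the loading should look roughly like this (a sketch; load_torch_image is just an illustrative name, and I assume OpenCV is used for reading):

import cv2
import kornia as K

def load_torch_image(fname):
    # read as RGB uint8, then convert to float BEFORE dividing by 255:
    # K.image_to_tensor keeps the numpy dtype, so without .float() the division
    # can truncate (depending on the torch version) and leave an almost-binary image
    img = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2RGB)
    timg = K.image_to_tensor(img, False).float() / 255.0
    return K.color.rgb_to_grayscale(timg)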
After the fix above:
You are right, that was the issue. Thank you so much for your quick help.
Hello,
I tried to run MKD_TFeat_descriptors_in_kornia.ipynb, but my results differ significantly from the example notebook. Most importantly, using deep-learning-based descriptors does not yield better matches.
I made a single change to the example code: instead of
matchesMask = inliers_mask.ravel().tolist()
I use
matchesMask = inliers_mask.astype(int).ravel().tolist()
because the former was throwing a cv2.error: "Can't parse 'matchesMask'. Sequence item with index 0 has a wrong type". (The surrounding drawing call is sketched below.)
Package versions:
The output is in this notebook.
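For context, the drawing call after my change looks roughly like this (a sketch; variable names such as kps1, kps2, and tentative_matches are placeholders for the corresponding objects in the notebook):

import cv2

# inliers_mask comes from cv2.findHomography and has dtype uint8
draw_params = dict(matchColor=(255, 255, 0),
                   singlePointColor=None,
                   matchesMask=inliers_mask.astype(int).ravel().tolist(),  # my change
                   flags=2)  # do not draw unmatched keypoints
img_out = cv2.drawMatches(img1, kps1, img2, kps2, tentative_matches, None, **draw_params)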
Do you have an idea what is causing the different behaviour? Thanks a lot for your time!