vcg-uvic / lf-net-release

Code Release for LF-Net: Learning Local Features from Images
314 stars 66 forks

Find Matches #11

Open IddoBotzer opened 5 years ago

IddoBotzer commented 5 years ago

Hi, I was able to run the net test and extract features from my images; now I'd like to see the matching. Is the only way to do so to run the demo notebook?

If so, what do I need to change in order to run it on my own data? (I have depth images as well)

Thank you

framanni commented 5 years ago

Hi, same question!

Thank you

kmyi commented 5 years ago

Hi, the notebook demos are there to provide a detailed example of what the input and output structures look like. For example, as per the readme, you can run

python run_lfnet.py --in_dir=images --out_dir=outputs

which is actually how we use this codebase these days.

framanni commented 5 years ago

Hi, how is it possible to see the matching on my own data (after running the command above)? Thanks

framanni commented 5 years ago

In LIFT you provide a function to convert keypoints into OpenCV cv2.KeyPoint objects. Is there any support for that here?

abduallahmohamed commented 5 years ago

I have the same issue: after running the command above, how can I show the correspondences between the images?

framanni commented 5 years ago

Is there any way to solve this?

GulyaevMaxim commented 5 years ago

In LIFT you provide a function to convert keypoints into OpenCV cv2.KeyPoint objects. Is there any support for that here?

import cv2

pts = outs['kpts']  # (N, 2) array of keypoint coordinates (x, y)
kpts = [cv2.KeyPoint(float(pts[i][0]), float(pts[i][1]), 1)
        for i in range(pts.shape[0])]
zvadaszi commented 5 years ago

Hi, I would like to calculate the homography between two images using the keypoints returned by LF-Net (for example with OpenCV's findHomography). So far I was able to extract the detection endpoints, extract the keypoint patches, and get the kpts2_corr list. How can I filter for the best matches in order to remove unwanted correspondences?

Thank you.

ApeCoding commented 4 years ago

Hi, I was able to run the net test and extract features from my images; now I'd like to see the matching. Is the only way to do so to run the demo notebook?

If so, what do I need to change in order to run it on my own data? (I have depth images as well)

Thank you

Hi, I have the same question. Did you manage to solve it, i.e. run the matching on your own data without the Jupyter demo? Thank you!

BruceWANGDi commented 3 years ago

@IddoBotzer Hi, I tried to run the feature-extraction demo with the command below: python run_lfnet.py --in_dir=1_1.jpg --out_dir=output but an error occurred: "ImportError: libcublas.so.8.0: cannot open shared object file: No such file or directory". Could you give me an example of your command? Another question: have you found any way to run the matching demo "demo.ipynb" on your own data?

Looking forward to your reply, Thank you.

kmyi commented 3 years ago

Hi. The error is just telling you that you don't have cuBLAS installed properly. This is an environment configuration issue.
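For what it's worth, a quick way to diagnose this (a sketch; CUDA install paths vary by machine, the path below is a typical default):

```shell
# See whether the dynamic linker can find the CUDA 8.0 cuBLAS library.
ldconfig -p | grep libcublas || echo "libcublas not found in linker cache"

# If CUDA 8.0 is installed but not on the search path, point the linker at it
# (adjust the path to your installation):
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH
```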