mdaiter / openMVG

openMVG with a LATCH descriptor, an ORB descriptor, DEEP descriptors from the cvpr15compare repo, PNNet/Torch loader and a GPU-based L2 matcher integrated

LATCH descriptor does not seem to work well on my dataset #5

Open LingyuMa opened 8 years ago

LingyuMa commented 8 years ago

@mdaiter I have tried LATCH binary and LATCH unsigned, and it seems neither of them works very well on my image set (fewer features are detected and far fewer matches are found). Are there any suggestions for tuning the parameters?

LingyuMa commented 8 years ago

Also, the matching is still not fast even when I use GPU_LATCH. Is there any way to speed it up?

mdaiter commented 8 years ago

@LingyuMa what's your dataset and what numbers are you getting?

mdaiter commented 8 years ago

@LingyuMa I also set the ratio parameter to 0.99 when matching: binary descriptors are sensitive to those sorts of changes, and these fluctuations can seriously kill matching ability.

LingyuMa commented 8 years ago

I have also changed that to 0.99

LingyuMa commented 8 years ago

I have attached the image set and matches.bin, can you have a look? https://drive.google.com/file/d/0BwWAt5w3811WdG16Z1ZKRGpVTFk/view?usp=sharing https://drive.google.com/file/d/0BwWAt5w3811WV1U5Wm9hTEdGTk0/view?usp=sharing

LingyuMa commented 8 years ago

@mdaiter What parameters are you using for matching?

mdaiter commented 8 years ago

@LingyuMa I'm just using -r 0.99. That's it... hm. How many putative matches do you get, and how many geometric ones? Are you using -g e or -g f?

LingyuMa commented 8 years ago

@mdaiter Can you run my dataset on your computer to see what happens? I am using the fundamental matrix for filtering, so it is -g f. I have attached my matches.f.bin; I'm not sure how to check it.

mdaiter commented 8 years ago

./bin/openMVG_main_exportMatches -i outputLatch/sfm_data.json -d outputLatch -m outputLatch/matches.putative.bin -o matches will give you all of your matcher data back and export it to svgs. Curious to see the numbers.

LingyuMa commented 8 years ago

It seems it gave me a bunch of SVGs showing the matches. Is there a way to show the total number?

LingyuMa commented 8 years ago

@mdaiter For matches.f.bin, all I can say is that it gave me 136 image pairs. The pairs seem reasonable. The file is 49.5MB, which is much smaller than the SIFT one (209.4MB); I can also see that the matching becomes sparser with LATCH.

mdaiter commented 8 years ago

The total number of matches between each pair appears right at the end of the SVG filename. The format is x_yn.svg, where x is the ID of the first image, y is the ID of the second image, and n is the number of matches between them.
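As an aside, a small standalone C++17 sketch like the following could total those counts across the export directory (illustrative only, not an openMVG tool; it assumes the match count is the last underscore-separated field of each SVG filename):

```cpp
// Illustrative helper, not part of openMVG: sums the trailing match count
// from exported SVG filenames, assuming each name ends in "_<count>.svg".
#include <filesystem>
#include <iostream>
#include <string>

int main(int argc, char** argv) {
    const std::string dir = argc > 1 ? argv[1] : "matches";
    long long total = 0;
    for (const auto& entry : std::filesystem::directory_iterator(dir)) {
        if (entry.path().extension() != ".svg") continue;
        const std::string stem = entry.path().stem().string();  // e.g. "3_7_412"
        const auto pos = stem.find_last_of('_');
        if (pos == std::string::npos) continue;
        try {
            total += std::stoll(stem.substr(pos + 1));           // trailing match count
        } catch (const std::exception&) { /* no numeric suffix, skip */ }
    }
    std::cout << "total matches across all pairs: " << total << "\n";
    return 0;
}
```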

LingyuMa commented 8 years ago

Here are screenshots of the two output match SVG files (akaze, latch).

LingyuMa commented 8 years ago

@mdaiter the second image is LATCH

LingyuMa commented 8 years ago

I know it is hard to see, but the number of image pairs is about 5 times less.

mdaiter commented 8 years ago

@LingyuMa Can you send me the SIFT matches that align with the LATCH matches? It seems as though the SIFT matches compare sets whose LATCH equivalents aren't visible from your screenshots

LingyuMa commented 8 years ago

The problem is that the matched images are not the same for the two descriptors. I'll see what I can do when I come back from lunch.


LingyuMa commented 8 years ago

@mdaiter Can you have a look at these two photos? (selection_005, selection_004)

LingyuMa commented 8 years ago

The first one is LATCH.

mdaiter commented 8 years ago

@LingyuMa these seem correct. Maybe @csp256 (original author of the library) could provide some insight, but I believe these are the results you should be receiving back from each image.

LingyuMa commented 8 years ago

But the number of matches seems much lower than with SIFT, which makes the global reconstruction fail. Is there a way to increase the number of matches?


mdaiter commented 8 years ago

@LingyuMa if you modify these two parameter files: https://github.com/mdaiter/cudaLATCH/blob/cf05a8fdf19b83519e68cc0c184e334f83be18e5/params.hpp and https://github.com/mdaiter/openMVG/blob/custom/src/openMVG/matching_image_collection/gpu/params.hpp, you'll be able to tune the matching threshold and the total number of points allowed to be detected. Each increment of NUM_SM gives back 512 more keypoints.
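For illustration, the kind of header being described would look roughly like this; NUM_SM is the constant mentioned above, while MATCH_THRESHOLD and the concrete values are placeholders rather than the actual contents of either params.hpp:

```cpp
// Rough sketch only — not the real params.hpp from cudaLATCH or this fork.
#pragma once

// Each increment of NUM_SM allows 512 more keypoints, so the per-image
// keypoint budget is NUM_SM * 512.
constexpr int NUM_SM = 12;                  // placeholder: 12 * 512 = 6144 keypoints
constexpr int MAX_KEYPOINTS = NUM_SM * 512;

// Hypothetical Hamming-distance acceptance threshold for the GPU matcher;
// raising it admits weaker matches, lowering it keeps only the strongest.
constexpr int MATCH_THRESHOLD = 64;         // placeholder value
```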

LingyuMa commented 8 years ago

Also, I have found that the matching time is still pretty slow compared with the openMVG default matching method + SIFT, which is really weird. Is there any way to accelerate it?


mdaiter commented 8 years ago

@LingyuMa if you're using the LATCH_UNSIGNED method, I'd use the GPU_LATCH matching method; otherwise you're comparing two fundamentally different ways of matching. With SIFT, you'd have to run BRUTE_FORCE_MATCHER_L2 in order to make a fair comparison. I have the numbers on my computer, and it's far slower than the BRUTE_FORCE_HAMMING matcher.

LingyuMa commented 8 years ago

@mdaiter the problem is that I am using LATCH_UNSIGNED + GPU_LATCH and comparing it with SIFT + ANNL2; the speed does not improve.

csp256 commented 8 years ago

Something is definitely up. The number of matches is sometimes what I would expect (around 10k), but much lower the rest of the time (<1k). I am interpreting this as my code working and something upstream being broken.

I really do not think that the ratio test makes sense in Hamming space: as a first-order improvement, you should impose a hard threshold on the gap between the best and second-best matches. This is already done in the GPU matcher.
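As a sketch of what that gap test means (my own illustration, not the library's CUDA matcher), assuming 512-bit binary descriptors packed as eight 64-bit words and a hypothetical minGap parameter:

```cpp
// Illustrative CPU brute-force Hamming matcher that keeps a match only when the
// best candidate beats the second-best by a hard margin (instead of a ratio test).
#include <array>
#include <cstdint>
#include <limits>
#include <vector>

struct Match { int queryIdx, trainIdx, distance; };

// Hamming distance between two 512-bit descriptors packed as 8 x uint64_t.
inline int hamming512(const uint64_t* a, const uint64_t* b) {
    int d = 0;
    for (int i = 0; i < 8; ++i) d += __builtin_popcountll(a[i] ^ b[i]);
    return d;
}

std::vector<Match> matchWithGap(const std::vector<std::array<uint64_t, 8>>& query,
                                const std::vector<std::array<uint64_t, 8>>& train,
                                int minGap /* hypothetical, e.g. 16 bits */) {
    std::vector<Match> out;
    for (int q = 0; q < static_cast<int>(query.size()); ++q) {
        int best = std::numeric_limits<int>::max(), second = best, bestIdx = -1;
        for (int t = 0; t < static_cast<int>(train.size()); ++t) {
            const int d = hamming512(query[q].data(), train[t].data());
            if (d < best)        { second = best; best = d; bestIdx = t; }
            else if (d < second) { second = d; }
        }
        // Accept only when the runner-up is at least minGap bits worse.
        if (bestIdx >= 0 && second - best >= minGap)
            out.push_back({q, bestIdx, best});
    }
    return out;
}
```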

If the CPU matcher is slow, you are probably being bitten by the Intel popcount bug. Can you try the GPU matcher?

mdaiter commented 8 years ago

Agreed with @csp256. I'm curious: what is your total number of putative matches? You can check by running the exportMatches command with matches.putative.bin instead of matches.f.bin.

mdaiter commented 8 years ago

@LingyuMa if you're looking for a GPU brute-force L2 matcher, I just finished one and should be pushing the code either today or tomorrow. It's based on the default OpenCV version of the GPU matcher, but I'm implementing a CUDA dynamic-parallelism solution at the moment and will let you know when it's ready.
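For reference, the stock OpenCV CUDA brute-force L2 matcher that this builds on is normally used roughly like this (a generic sketch of the OpenCV API, not the code in this repo):

```cpp
// Generic example of OpenCV's CUDA brute-force L2 matcher (opencv_cudafeatures2d);
// not the integration in this fork, just the underlying API it builds on.
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/cudafeatures2d.hpp>

std::vector<cv::DMatch> gpuBruteForceL2(const cv::Mat& descA, const cv::Mat& descB) {
    // Upload float descriptors (e.g. 128-D SIFT) to the GPU.
    cv::cuda::GpuMat dA(descA), dB(descB);

    auto matcher = cv::cuda::DescriptorMatcher::createBFMatcher(cv::NORM_L2);

    // knnMatch with k = 2 so a ratio test can be applied afterwards.
    std::vector<std::vector<cv::DMatch>> knn;
    matcher->knnMatch(dA, dB, knn, 2);

    std::vector<cv::DMatch> good;
    for (const auto& pair : knn)
        if (pair.size() == 2 && pair[0].distance < 0.99f * pair[1].distance)
            good.push_back(pair[0]);  // 0.99 ratio, matching the -r 0.99 used in this thread
    return good;
}
```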

mdaiter commented 8 years ago

@LingyuMa my GPU L2 Brute Force matcher is now finished. Feel free to use it with SIFT, PNNet, LATCH, DeepSiam2Stream or DeepSiam

pmoulon commented 8 years ago

Did you try extracting LATCH descriptors on SIFT keypoints? Since there is a "clean" SIFT integration pending, it could be easy to test: https://github.com/openMVG/openMVG/issues/556. We can also test the LATCH descriptor on an Affine detector (we can extract rectified patch regions and compute the descriptor on them). See here for Affine patch normalization: https://github.com/openMVG/openMVG/blob/master/src/openMVG_Samples/features_affine_demo/features_affine_demo.cpp (only the rotation invariance is missing: compute the rectified patch rotation, then rotate the patch).
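For what it's worth, that missing rotation step could be sketched roughly like this with OpenCV (a generic illustration, not existing openMVG code): rotate the rectified patch by its estimated orientation before computing the descriptor.

```cpp
// Generic sketch of the missing rotation-invariance step: rotate a rectified
// patch by the keypoint's estimated orientation so the descriptor is computed
// on an upright patch. Not existing openMVG code; OpenCV is used for brevity.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

cv::Mat rotatePatchUpright(const cv::Mat& rectifiedPatch, float orientationDeg) {
    const cv::Point2f center(rectifiedPatch.cols * 0.5f, rectifiedPatch.rows * 0.5f);
    // Rotate by the negative orientation to bring the dominant direction to 0 degrees.
    const cv::Mat R = cv::getRotationMatrix2D(center, -orientationDeg, 1.0);
    cv::Mat upright;
    cv::warpAffine(rectifiedPatch, upright, R, rectifiedPatch.size(),
                   cv::INTER_LINEAR, cv::BORDER_REPLICATE);
    return upright;
}
```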

mdaiter commented 8 years ago

@LingyuMa and @pmoulon if you look on the Oxford testing branch, you'll find all of that code already integrated. SIFT keypoints: https://github.com/mdaiter/cudaLATCH/blob/0a6a6285790f13559696bc54df3b23fa5a0b12b3/LatchClassifierOpenMVG.cpp

Affine Invariant points: (previous commit - looking to see where I left it)

pmoulon commented 8 years ago

Perhaps using the patch at the correct scale could improve the results. We can continue this discussion by mail if you want.

mdaiter commented 8 years ago

@pmoulon what's your mail address? Mine's mdaiter8121@gmail.com