ySalaun / LineSfM


Line Segment Matching Error #5

Closed · haopo2005 closed this issue 6 years ago

haopo2005 commented 6 years ago

Hi, I've tracked down the problem behind issue #4: the line matching result is terrible. Corresponding line pairs unexpectedly exist between two unrelated images (see the attached images r0010048_3_r0010049_1_matches_lines_vps and r0010049_2_r0010040_4_matches_lines).

There are also some warnings during the line matching stage, and some line match files are missing during calibration (see the attached screenshots _20180417234500 and _20180417234507).

Is the Line Band Descriptor not as robust as the SIFT point descriptor? Or is the RANSAC threshold in the matching stage incorrect?

ySalaun commented 6 years ago

Hi,

I have also observed this issue with different images, and it is difficult to get rid of. What I did was to compute matching only between consecutive pictures. However, this solution only works if you know that the pictures are in the right order, and if such an order actually exists.
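
As an illustration of that workaround, here is a minimal sketch of consecutive-pair matching; matchLines and picName are placeholders, not the actual LineSfM API:

```cpp
// A minimal sketch (not the actual LineSfM API): restrict matching to
// consecutive image pairs so unrelated images are never compared.
#include <iostream>
#include <string>
#include <vector>

// Placeholder for the real line matching call between two images.
void matchLines(const std::string &a, const std::string &b) {
  std::cout << "matching " << a << " <-> " << b << "\n";
}

int main() {
  std::vector<std::string> picName = {"r0010040", "r0010048", "r0010049"};
  // Instead of the all-pairs double loop, only match (i, i+1):
  for (size_t i = 0; i + 1 < picName.size(); ++i)
    matchLines(picName[i], picName[i + 1]);
  return 0;
}
```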

If you cannot use this solution, I can suggest a few alternatives that might work:

I am sorry I cannot give you a good solution, but I think one of the biggest current difficulties of SfM with lines is in fact the line matching...

Best,

Yohann

haopo2005 commented 6 years ago

Hi, I can't find the relationship between the vanishing point computation and the line matching computation in main_line_matching.cpp. It seems you just read or compute the VPs and then never use them. Did you integrate the line matching module from Lilian Zhang's code (https://github.com/mtamburrano/LBD_Descriptor)? There are too many thresholds for a newbie to tune. I will also test other line matching algorithms later, for example https://docs.opencv.org/3.4.0/df/dfa/tutorial_line_descriptor_main.html or https://github.com/kailigo/LineSegmentMatching (not efficient, but really accurate).

I'd also like to know the internal structure of x_y_matches_line.txt, so that I can replace the line matching module and continue with the calibration stage (computing the relative camera pose).

Best regards, Jin

ySalaun commented 6 years ago

Hi,

Regarding the VPs, that is historical code I forgot to erase. I tried accepting line matches only when the vanishing points agreed globally, but it didn't work well, so you can ignore/erase this part.

The line matching code is from Lilian Zhang, but not the version on GitHub: it is the one from his website (which requires a painful installation). I agree that the thresholds are numerous and thus difficult to tune.

About the OpenCV code: it is supposed to be LBD with LSD, but I got far worse results with it than with Lilian Zhang's code, whereas my results and Zhang's are close. (Since the detection is different the results cannot be identical, but the matching part is essentially Zhang's code copy-pasted and converted to the OpenCV library.)
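
For reference, a minimal sketch of that OpenCV pipeline (line detection plus LBD description and binary matching via the opencv_contrib line_descriptor module); the image names and the distance threshold are example values, not recommendations from this thread:

```cpp
// Sketch of the OpenCV line_descriptor pipeline mentioned above
// (requires opencv_contrib). The distance threshold is an example value.
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/line_descriptor.hpp>
#include <vector>

using namespace cv;
using namespace cv::line_descriptor;

int main() {
  Mat img1 = imread("im1.png", IMREAD_GRAYSCALE);
  Mat img2 = imread("im2.png", IMREAD_GRAYSCALE);

  // Detect line segments and compute LBD descriptors.
  Ptr<BinaryDescriptor> bd = BinaryDescriptor::createBinaryDescriptor();
  std::vector<KeyLine> lines1, lines2;
  Mat desc1, desc2;
  bd->detect(img1, lines1);
  bd->detect(img2, lines2);
  bd->compute(img1, lines1, desc1);
  bd->compute(img2, lines2, desc2);

  // Match binary descriptors and keep only the closest ones.
  Ptr<BinaryDescriptorMatcher> matcher =
      BinaryDescriptorMatcher::createBinaryDescriptorMatcher();
  std::vector<DMatch> matches, good;
  matcher->match(desc1, desc2, matches);
  const float kMaxDist = 25.0f;  // example threshold, tune per dataset
  for (const DMatch &m : matches)
    if (m.distance < kMaxDist) good.push_back(m);
  return 0;
}
```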

About https://github.com/kailigo/LineSegmentMatching: if you have already tested it on your dataset and it works, I think it would be the best solution. But beware, the usual datasets used in line matching papers are far easier than yours, so you need to test it first :)

About the file x_y_matches_line.txt, it is just a simple txt file with:

Best,

Yohann

haopo2005 commented 6 years ago

Thanks for your advice about OpenCV. I'd like to simply exclude the unrelated image pairs and keep only good inliers among the matched lines. As for "validate the calibration hypothesis", I think that is the basic RANSAC pipeline, and you should have already implemented it in main_calibration.cpp, haven't you? Currently, I need to fix the missing matching files caused by the failure to compute the principal adjacency matrix. That leaves me stuck with an index-out-of-range error at this line: matches_lines.insert(PictureMatches(imPair, readMatches(dirPath, picName[i], picName[j], LINE)));
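
One hedged way around that crash: skip pairs whose match file was never written instead of letting readMatches run on a missing file. The file naming below follows the x_y_matches_line.txt pattern mentioned in this thread, but the exact path layout is an assumption:

```cpp
// Sketch of a guard against missing match files. The file naming follows
// the x_y_matches_line.txt pattern from this thread; adapt it if needed.
#include <fstream>
#include <string>

// Returns true if the line match file for this image pair exists on disk.
bool hasLineMatches(const std::string &dirPath,
                    const std::string &picA, const std::string &picB) {
  std::ifstream f(dirPath + "/" + picA + "_" + picB + "_matches_line.txt");
  return f.good();
}

// Usage idea around the crashing line:
//   if (hasLineMatches(dirPath, picName[i], picName[j]))
//     matches_lines.insert(PictureMatches(
//         imPair, readMatches(dirPath, picName[i], picName[j], LINE)));
//   else
//     continue;  // skip pairs whose matching failed
```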

ySalaun commented 6 years ago

For the "validate calibration hypothesis" I don't really have implemented it. You can add to the pipeline a condition of the form: if(finalNFA > 0) then (reject solution) otherwise (keep it) The 0 threshold is the one usually used in a contrario methods but may be this threshold could be tuned. To know if it can work, you just have to display the final NFA for every image pairs and check if for bad pairs the nfa is above a given value and for good pairs it is below it.

About your matching file error, I don't really understand what is happening. Does the matching fail? Or does the match reading fail?

Best,

Yohann

haopo2005 commented 6 years ago

Hi, as for "validate calibration hypothesis", I think the easiest way for me is to compare the number of inliers from the HAC_RANSAC.computeRelativePose stage against some kind of threshold. From my understanding of your paper, the a-contrario approach in RANSAC selects the scale with the lowest NFA, and the final NFA is a chained product of the coplanarity and trifocal constraint NFAs. The computeRelativePose function is too complicated; I can't follow the code alongside the paper.

Besides, your paper says, 'Line-based calibration is thus prone to be less accurate in practice than point-based calibration, and even less when two lines are involved in a feature, as in line coplanarity'. Does that mean that if there are enough feature points, I should prefer point features over line features when computing the relative pose?

haopo2005 commented 6 years ago

I've tried different inlier thresholds and LBD thresholds. It is really difficult to get rid of the mismatch problem; there are always false positive or false negative matching pairs. Maybe it is wrong to handle unrelated images in a static set: some matches will appear naturally whatever algorithm is used. The choice of input images should instead be a dynamic, incremental pipeline.

ySalaun commented 6 years ago

Hi,

Sorry for the late reply, I was on holiday.

About the NFA threshold, just look at the variable minNFA in the computeRelativePose function in hybrid_essential.cpp. If it is higher in the wrong cases than in the good ones, then you can use a threshold on this value to tell whether the calibration went well or not.
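
A small diagnostic sketch for that check: dump minNFA per image pair to a CSV and look for a value that separates good pairs from bad ones. The struct and logging format here are made up for illustration:

```cpp
// Sketch: log per-pair minNFA values so a separating threshold can be
// picked by inspection. The names below are illustrative only.
#include <fstream>
#include <string>
#include <vector>

struct PairNFA {
  std::string picA, picB;
  double minNFA;  // value read from computeRelativePose for this pair
};

void dumpNFA(const std::vector<PairNFA> &pairs, const std::string &path) {
  std::ofstream out(path);
  out << "picA,picB,minNFA\n";
  for (const PairNFA &p : pairs)
    out << p.picA << "," << p.picB << "," << p.minNFA << "\n";
}
```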

About points versus lines: in fact, what we observed is that in cases where many points are detected (e.g. > 1000), lines are not useful and can even slightly decrease the accuracy of the result. However, in your case it seems that there are too few points to obtain good results with points only.

I agree that this issue is difficult to fix. Another possibility would be to use a graph-based algorithm (as usually done in SfM methods). The idea is to accept every calibration result, then build a graph of relations between all cameras (mainly with the rotation information, since you don't have the translation scale) and find the outliers. I didn't implement this part in the code, but it is in openMVG, for example.
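
For a flavor of what such a graph check does, here is a hedged sketch of the classic rotation loop consistency test over camera triplets; Eigen is an assumed dependency, and none of this is part of LineSfM:

```cpp
// Sketch of triplet rotation consistency (not part of LineSfM).
// With x_j = R_ij * x_i, a consistent triplet (i, j, k) satisfies
// R_ik ~ R_jk * R_ij, so the loop rotation below should be near identity.
#include <Eigen/Dense>
#include <algorithm>
#include <cmath>
#include <map>
#include <utility>

using Mat3 = Eigen::Matrix3d;
using PairIdx = std::pair<int, int>;

// Angle (radians) by which a rotation differs from the identity.
double rotationAngle(const Mat3 &R) {
  double c = 0.5 * (R.trace() - 1.0);
  c = std::max(-1.0, std::min(1.0, c));  // clamp for numerical safety
  return std::acos(c);
}

// Flag a triplet as consistent when its rotation loop closes within maxAngle.
bool tripletConsistent(const std::map<PairIdx, Mat3> &relRot,
                       int i, int j, int k, double maxAngleRad) {
  const Mat3 loop = relRot.at({i, k}).transpose() *
                    relRot.at({j, k}) * relRot.at({i, j});
  return rotationAngle(loop) < maxAngleRad;
}
```

Edges whose triplets consistently fail this test are natural candidates for the outlier relative poses to discard before any global rotation averaging.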

Best,

Yohann