Hi, the thresholds on the line score are reported in the paper on page 6. We used thresholds of 0.98, 0.97, and 0.9 for LCNN, HAWP, and DeepHough, respectively.
I am sorry, maybe my explanation was not very clear. I saw the code for evaluating LCNN in "evaluate_line_repeatability.py", as shown below:

```python
### LCNN method
lcnn_checkpoint_path = "./misc/lcnn_pretrained.pth.tar"
line_detector_lcnn = LineDetectorLCNN(
    lcnn_model_cfg_path, lcnn_checkpoint_path, device
)
```

I thought you were running this script to compute the LCNN metrics. However, `LineDetectorLCNN` does not set the line score threshold parameter. When I ran the script, the results were much lower than the metrics in the paper.
Do you know what's wrong?
Thank you!
Hi, sorry, I went on vacation and then forgot about this issue. I don't remember why the threshold is not set in the evaluation code online, but you can probably add it yourself very easily, following the original LCNN repo: https://github.com/zhou13/lcnn/blob/57524636bc4614a32beac1af3b31f66ded2122ae/demo.py#L124 Using the scores output by postprocess, you can filter the lines and keep only the ones passing the threshold.
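For example, here is a minimal sketch of that filtering step, assuming `nlines` and `nscores` are the line segments and per-line scores returned by LCNN's postprocess step (the array shapes and the `filter_lines_by_score` helper name are just for illustration); 0.98 is the LCNN threshold mentioned above:

```python
import numpy as np

# Threshold reported for LCNN in the paper (0.97 for HAWP, 0.9 for DeepHough).
LCNN_SCORE_THRESHOLD = 0.98

def filter_lines_by_score(nlines, nscores, threshold=LCNN_SCORE_THRESHOLD):
    """Keep only the line segments whose score passes the threshold."""
    nlines = np.asarray(nlines)    # assumed shape (N, 2, 2): N segments, 2 endpoints
    nscores = np.asarray(nscores)  # assumed shape (N,): one score per segment
    keep = nscores >= threshold
    return nlines[keep], nscores[keep]

# Example usage, following the LCNN demo (exact postprocess call assumed):
# nlines, nscores = postprocess(lines, scores, diag * 0.01, 0, False)
# nlines, nscores = filter_lines_by_score(nlines, nscores)
```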
The eval code was development code and was never properly cleaned up, so sorry that it is neither well documented nor recently tested. I cannot provide additional support for this code at the moment.
Hi, thank you very much for sharing the dev version of the metric evaluation code. I tried "evaluate_line_repeatability.py" to reproduce the evaluation results of the various methods reported in the paper (Table 1). However, I found that the LCNN and DeepHough methods do not set a line score threshold parameter in the code, and the evaluation results for LCNN, DeepHough, and HAWP are much lower than those in the paper.
I am not sure what is wrong. Do you know what the problem might be?
Thank you!