rpautrat / SuperPoint

Efficient neural feature detector and descriptor
MIT License

parameter issues #67

Closed: shylockyuan closed this issue 5 years ago

shylockyuan commented 5 years ago

Thank you for your advice, it works. I trained SuperPoint, but the results of superpoint_hpatches-repeatability-i and superpoint_hpatches-repeatability-v are worse than MagicPoint's, while the results for FAST, Harris and Shi are close to yours. I am not sure whether I should repeat step 2 or change nms in the configs (nms is 8 in the SuperPoint config and 4 in the others). I am still confused about the nms parameter and about the comparison between noise and no noise.

Q1: nms = 4 or 8? Should I change nms in classical-detectors_repeatability.yaml, magic-point_repeatability.yaml and superpoint_hpatches.yaml, and then run:

python export_detections_repeatability.py configs/magic-point_repeatability.yaml magic-point_coco --export_name=magic-point_hpatches-repeatability-v

Is that right? Do I need to change nms in other configs and retrain SuperPoint?

Q2: noise vs. no noise. Should I set add_augmentation_to_test_set to true in magic-point_shapes.yaml and in classical-detectors_shapes.yaml? If so, do I need to retrain SuperPoint, since it is based on magic-point_shapes.yaml? And I don't know how to use classical-detectors_shapes.yaml.

rpautrat commented 5 years ago

Q1: To compare the repeatability, I always used an nms of 4 (whether for MagicPoint or SuperPoint). The only time I used an nms of 8 was to compute the descriptors on HPatches, because the images were resized to a larger size (480 x 640).

Q2: The parameter 'add_augmentation_to_test_set' is only used to evaluate the robustness of your algorithm in the presence of noise, so it should be set to false when you train MagicPoint.
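
For reference, that flag is a single line in magic-point_shapes.yaml; a minimal sketch (the enclosing 'data' key is an assumption, check the actual file for the exact nesting):

data:
  add_augmentation_to_test_set: false  # set to true only to evaluate robustness to noise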

To improve the repeatability, you can repeat step 2 as long as the repeatability keeps improving.

classical-detectors_shapes.yaml is used to evaluate the output of the classical detectors on the synthetic shapes. It is used like this:

python export_detections.py configs/classical-detectors_shapes.yaml classical-detector_synth --export_name <name of your export>

shylockyuan commented 5 years ago

Hi! Q1:

[image]

So if I want to run the experiment as in the paper, do I just change the value of nms (4 or 8) in classical-detectors_repeatability.yaml, magic-point_repeatability.yaml and superpoint_hpatches.yaml?

Q2: I repeated step 2 with:

python export_detections.py configs/magic-point_coco_export.yaml magic-point_synth --pred_only --batch_size=5 --export_name=magic-point_coco-export2
python experiment.py train configs/superpoint_coco.yaml superpoint_coco1
python export_descriptors.py configs/superpoint_hpatches.yaml superpoint_coco --export_name=superpoint_hpatches-v

But the result is still not good, same as before. The visualizations are:

[image] [image]

Repeatability   i (illumination)   v (viewpoint)
SuperPoint      0.575              0.253
MagicPoint      0.663              0.394
FAST            0.576              0.407
Harris          0.625              0.474
Shi             0.630              0.407
Random          0.109              0.174
rpautrat commented 5 years ago

Q1: Technically, you would need to retrain MagicPoint and then SuperPoint twice (once with an nms of 4 and once with an nms of 8) by changing the nms parameter in magic-point_shapes.yaml, magic-point_coco_export.yaml, magic-point_coco_train.yaml and superpoint_coco.yaml. But in practice I think that you can just reuse your training with an nms of 4 and change the nms parameter only when evaluating the repeatability (so in classical-detectors_repeatability.yaml, magic-point_repeatability.yaml and superpoint_hpatches.yaml). The difference should not be significant.
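
In practice that evaluation-side edit is a single line in each of the three configs above; a minimal sketch (the enclosing 'model' key is an assumption, check each file for the exact nesting):

model:
  nms: 4  # use 8 only for the descriptor evaluation on 480 x 640 images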

Q2: I am not sure that you repeated step 2 correctly. It should look more like this:

python export_detections.py configs/magic-point_coco_export.yaml magic-point_synth --pred_only --batch_size=5 --export_name=magic-point_coco-export1
# Change the parameter 'labels' in magic-point_coco_train.yaml to the folder magic-point_coco-export1
python experiment.py train configs/magic-point_coco_train.yaml magic-point_coco
python export_detections.py configs/magic-point_coco_export.yaml magic-point_coco --pred_only --batch_size=5 --export_name=magic-point_coco-export2
# Change the parameter 'labels' in superpoint_coco.yaml to the folder magic-point_coco-export2
python experiment.py train configs/superpoint_coco.yaml superpoint_coco
python export_descriptors.py configs/superpoint_hpatches.yaml superpoint_coco --export_name=superpoint_hpatches-v
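
The 'labels' edits mentioned in the comments above are one-line changes in the data section of the corresponding training config; for example in superpoint_coco.yaml (a sketch only, the exact path depends on where your exports were written):

data:
  labels: outputs/magic-point_coco-export2  # pseudo-labels produced by the previous export step
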
shylockyuan commented 5 years ago

Hello, I tried your advice but it didn't work. The detector_repeatability-v of SuperPoint is 0.238 and detector_repeatability-i is 0.564, but the repeatability of the descriptors is similar to your result. Could it be because I trained MagicPoint on a 1080 and SuperPoint on a 1080 Ti?

rpautrat commented 5 years ago

No, that shouldn't change anything. What is the log output from your SuperPoint training? And the TensorBoard graphs?

shylockyuan commented 5 years ago

The log records many training sessions; here is the last one:

2019-04-24 23:16:32.756697: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1312] Adding visible gpu devices: 0
2019-04-24 23:16:32.756815: I tensorflow/core/common_runtime/gpu/gpu_device.cc:993] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 315 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
[04/24/2019 23:16:43 INFO] Start training
2019-04-24 23:16:52.088481: I tensorflow/core/kernels/cuda_solvers.cc:159] Creating CudaSolver handles for stream 0xfdbb930
[04/24/2019 23:17:37 INFO] Iter 0: loss 25.6379, precision 0.0039, recall 0.0094
[04/24/2019 23:39:03 INFO] Iter 5000: loss 7.0775, precision 0.0180, recall 0.0445
[04/25/2019 00:00:13 INFO] Iter 10000: loss 4.3278, precision 0.0251, recall 0.0615
[04/25/2019 00:21:01 INFO] Iter 15000: loss 2.7323, precision 0.0317, recall 0.0766
[04/25/2019 00:41:48 INFO] Iter 20000: loss 1.7573, precision 0.0380, recall 0.0782
[04/25/2019 01:02:35 INFO] Iter 25000: loss 1.4507, precision 0.0480, recall 0.0786
[04/25/2019 01:23:21 INFO] Iter 30000: loss 1.3757, precision 0.0484, recall 0.0819
[04/25/2019 01:44:08 INFO] Iter 35000: loss 1.2801, precision 0.0584, recall 0.0861
[04/25/2019 02:04:55 INFO] Iter 40000: loss 1.2106, precision 0.0609, recall 0.0884
[04/25/2019 02:25:42 INFO] Iter 45000: loss 1.4565, precision 0.0697, recall 0.0989
[04/25/2019 02:46:29 INFO] Iter 50000: loss 0.9614, precision 0.0671, recall 0.1000
[04/25/2019 03:07:17 INFO] Iter 55000: loss 1.2946, precision 0.0672, recall 0.1037
[04/25/2019 03:28:03 INFO] Iter 60000: loss 1.3685, precision 0.0775, recall 0.1095
[04/25/2019 03:48:51 INFO] Iter 65000: loss 1.5566, precision 0.0799, recall 0.1158
[04/25/2019 04:09:38 INFO] Iter 70000: loss 1.2059, precision 0.0825, recall 0.1208
[04/25/2019 04:30:24 INFO] Iter 75000: loss 1.4372, precision 0.0837, recall 0.1243
[04/25/2019 04:51:12 INFO] Iter 80000: loss 1.2720, precision 0.0935, recall 0.1300
[04/25/2019 05:11:57 INFO] Iter 85000: loss 1.3023, precision 0.0906, recall 0.1351
[04/25/2019 05:32:44 INFO] Iter 90000: loss 1.3739, precision 0.0918, recall 0.1365
[04/25/2019 05:53:31 INFO] Iter 95000: loss 0.9436, precision 0.0935, recall 0.1401
[04/25/2019 06:14:20 INFO] Iter 100000: loss 1.2997, precision 0.0984, recall 0.1458
[04/25/2019 06:35:07 INFO] Iter 105000: loss 1.1847, precision 0.0992, recall 0.1460
[04/25/2019 06:55:55 INFO] Iter 110000: loss 1.0246, precision 0.1102, recall 0.1487
[04/25/2019 07:16:43 INFO] Iter 115000: loss 1.1618, precision 0.1051, recall 0.1508
[04/25/2019 07:37:30 INFO] Iter 120000: loss 1.1273, precision 0.1104, recall 0.1505
[04/25/2019 07:58:17 INFO] Iter 125000: loss 1.0207, precision 0.1075, recall 0.1568
[04/25/2019 08:19:04 INFO] Iter 130000: loss 0.9976, precision 0.1131, recall 0.1563
[04/25/2019 08:39:50 INFO] Iter 135000: loss 1.1447, precision 0.1100, recall 0.1598
[04/25/2019 09:00:35 INFO] Iter 140000: loss 1.0432, precision 0.1096, recall 0.1623
[04/25/2019 09:21:22 INFO] Iter 145000: loss 1.3523, precision 0.1141, recall 0.1608
[04/25/2019 09:42:10 INFO] Iter 150000: loss 0.8406, precision 0.1158, recall 0.1655
[04/25/2019 10:02:56 INFO] Iter 155000: loss 1.0623, precision 0.1121, recall 0.1625
[04/25/2019 10:23:39 INFO] Training finished
[04/25/2019 10:23:41 INFO] Saving checkpoint for iteration #160000

[TensorBoard graph]

[TensorBoard graph]

I may have found the reason. I changed resize in superpoint_hpatches.yaml from [480, 640] to [240, 320], and the result is a little better. But this changes the number of points detected by SuperPoint, and I don't know how to solve that. Also, the SuperPoint descriptors_evaluation_on_hpatches result is similar to yours (resize [480, 640], nms = 8).

[image] [image]

rpautrat commented 5 years ago

To have a fair comparison, you always need the same image size (I think I used 240x320 to evaluate the repeatability) and the same number of detected points (I used 300 points). If you detect fewer than 300 points, it makes sense that the repeatability will be lower.
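
For reference, the evaluation-side settings discussed here would look roughly like this in the repeatability configs (a sketch; the nesting of 'resize' under 'preprocessing' is an assumption):

data:
  preprocessing:
    resize: [240, 320]  # same resolution for every detector being compared
model:
  top_k: 300  # same number of detected points for every detector being compared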

To increase the number of detected points, one way is to retrain and increase the number of points when using export_detections.py (increase the parameter top_k and lower the parameter detection_threshold). Or, if you don't want to retrain, you can also evaluate the repeatability on 200 points instead of 300.
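
On the export side, those two knobs are set in magic-point_coco_export.yaml; a minimal sketch with illustrative values (the defaults in your file may differ):

model:
  detection_threshold: 0.001  # a lower threshold lets more candidate points through
  top_k: 600  # a higher cap exports more points per image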

Another thing: I personally had a lower loss at the end of my training (0.5526) and a slightly better recall (above 0.17), so that might also explain the difference. You can try training a bit longer and tuning some parameters to get an improvement.