rpautrat / SuperPoint

Efficient neural feature detector and descriptor
MIT License

Trying to obtain more feature points #150

Closed jiyun-xiao closed 4 years ago

jiyun-xiao commented 4 years ago

Hi,

Thank you for your very well-written repo! I am trying to get around 1000 features at inference time, but the SuperPoint model I trained always gives me exactly 600 features.

Here are the efforts I made to try to obtain more feature points:

I set top_k = 2000 in config/superpoint_coco.yaml, but the SuperPoint model that I exported still gives exactly 600 points for any image I feed into it. (The detection threshold is the default value of 0.001.)

I thought I was not getting enough features from the MagicPoint model, so I double-checked that top_k was commented out in config/magic-point_coco_train.yaml and config/magic-point_coco_export.yaml, but the MagicPoint model that I exported gives exactly 1000 points for any image I feed into it.

I then searched the code for anything that might cap the number of features at exactly 1000 or exactly 600, but I haven't found it.

I'm wondering if you could shed some light on what the problem might be? Thank you very much for your help!

rpautrat commented 4 years ago

Hi,

What code are you using to get the keypoints? After inference with the model, are you directly using the 'pred' entry in the output? Or are you using the 'prob' (or 'prob_nms' if you want to apply NMS first) and then filtering it to keep the top_k points?

The 'pred' entry should indeed depend on the top_k and detection_threshold of the config file, but if the threshold is too high, you won't get enough keypoints (especially if the images are small). I suggest the second option instead: take the 'prob_nms' entry, use np.where(output['prob_nms'] > 0) to keep all the positive locations, and finally keep only the top_k best scores among them.
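A minimal sketch of that filtering, assuming output['prob_nms'] is a 2D score map with the same height and width as the image (the function name and the top_k default are illustrative, not from the repo):

```python
import numpy as np

def select_keypoints(prob_nms, top_k=1000):
    """Keep at most top_k keypoints from an NMS'ed probability map."""
    # All locations with a positive detection score, as (row, col) pairs.
    ys, xs = np.where(prob_nms > 0)
    scores = prob_nms[ys, xs]
    # Sort by score (descending) and keep the top_k best.
    order = np.argsort(-scores)[:top_k]
    keypoints = np.stack([ys[order], xs[order]], axis=-1)
    return keypoints, scores[order]

# e.g. keypoints, scores = select_keypoints(output['prob_nms'], top_k=1000)
```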

jiyun-xiao commented 4 years ago

Hi,

Thank you so much for your suggestion! I was indeed using prob_nms as the output, and I wasn't doing any filtering. But here is a demonstration of the effect I was talking about. The code is freshly pulled SuperPoint with no modification other than the print("kp1 length: " + str(len(kp1))) and print("kp2 length: " + str(len(kp2))) that I added in match_features_demo.py. The data is COCO_train2014. The model was trained with top_k: 2000 in config/superpoint_coco.yaml. The command is: python3 match_features_demo.py sp_0520 /home/jiyunxiao/SuperPoint_dataset/COCO_coco_subset/train2014/COCO_train2014_000000000404.jpg /home/jiyunxiao/SuperPoint_dataset/COCO_coco_subset/train2014/COCO_train2014_000000000404.jpg

(Six screenshots, captured 2020-05-26, showing the demo running on six different COCO images.)

In each of these screenshots, the upper-left window is the SuperPoint result visualization produced by match_features_demo.py, the upper-right window is the terminal currently running match_features_demo.py, the lower-left window is the original picture, and the lower-right window is the code of match_features_demo.py, showing where I added print("kp1 length: " + str(len(kp1))) and print("kp2 length: " + str(len(kp2))).

As you can see in the upper-right window of all 6 screenshots, the output is kp1 length: 600 and kp2 length: 600 every time. Judging from the pictures, many of them might have more than 600 keypoints. And even if they really do have only around 600 keypoints, it is hard to believe it is a coincidence that all 6 pictures give exactly 600 points. Thank you very much!

rpautrat commented 4 years ago

Hi, sorry for the late reply.

I indeed get the same result as you on my machine. The problem is that the model I exported had the parameter top_k set to 600, so you could get at most 600 keypoints with it. Sorry about that.

I have now re-exported the model, this time with no top_k specified. If you pull the latest changes, you should be able to use as many keypoints as you want.
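For intuition about why the config had no effect, here is a simplified, hypothetical sketch (not the repo's actual export code) of what a top_k baked into the exported graph does: the selection becomes a graph constant, so changing top_k in the config at inference time cannot alter it.

```python
import tensorflow as tf

def frozen_top_k_selection(prob_map, k=600):
    """Illustration only: once exported with a fixed k, the saved model
    always returns exactly k keypoints, whatever the config says later."""
    flat = tf.reshape(prob_map, [-1])
    # k is frozen into the graph at export time.
    scores, flat_indices = tf.nn.top_k(flat, k=k)
    # Convert flat indices back to (row, col) coordinates.
    width = tf.shape(prob_map)[1]
    keypoints = tf.stack([flat_indices // width, flat_indices % width], axis=-1)
    return keypoints, scores
```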

jiyun-xiao commented 4 years ago

Hi rpautrat,

Thank you so much for the new model! But after running it with match_features_demo.py on COCO pictures, I still get exactly 600 keypoints on each of them... I'm sorry about this...

rpautrat commented 4 years ago

Hi,

Are you sure you pulled the latest changes of the master branch? On my machine I now get 1000 keypoints per image (with the default parameters), whereas before pushing the last change I was also getting 600 keypoints like you.

jiyun-xiao commented 4 years ago

Yes, I removed the previous version of SuperPoint, re-cloned the newest version, and uncompressed pretrained_models/sp_v6 into EXPER_PATH/saved_models. Actually, when I first ran it (python3.6 match_features_demo.py sp_v6 /home/rog/SuperPoint_files/dataset/coco_subset/COCO_train2014_000000000034.jpg /home/rog/SuperPoint_files/dataset/coco_subset/COCO_train2014_000000000034.jpg), I encountered an error message:

Traceback (most recent call last):
  File "match_features_demo.py", line 115, in <module>
    str(weights_dir))
  File "/home/rog/.local/lib/python3.6/site-packages/tensorflow/python/saved_model/loader_impl.py", line 216, in load
    saver = tf_saver.import_meta_graph(meta_graph_def_to_load, **saver_kwargs)
  File "/home/rog/.local/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1909, in import_meta_graph
    **kwargs)
  File "/home/rog/.local/lib/python3.6/site-packages/tensorflow/python/framework/meta_graph.py", line 737, in import_scoped_meta_graph
    producer_op_list=producer_op_list)
  File "/home/rog/.local/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 432, in new_func
    return func(*args, **kwargs)
  File "/home/rog/.local/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 431, in import_graph_def
    _RemoveDefaultAttrs(op_dict, producer_op_list, graph_def)
  File "/home/rog/.local/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 211, in _RemoveDefaultAttrs
    op_def = op_dict[node.op]
KeyError: 'NonMaxSuppressionV3'

And so I had to replace the saved_model.pb in sp_v6 with the saved_model.pb from the previous sp_v6, and then it ran successfully, but the lengths of kp1 and kp2 are both 600, as shown in the pictures below (left: comparison of SuperPoint matches and SIFT matches; right: terminal output).

(Three screenshots, captured 2020-06-10: SuperPoint vs. SIFT match visualizations and the terminal output.)

Thank you very much!

rpautrat commented 4 years ago

Hi, yes, of course: if you use the saved_model.pb of the old checkpoint, you will still get the 600-keypoint cap.

The error you had with the new version is probably a Tensorflow compatibility issue. Which version of Tensorflow are you using? I exported and tested the new checkpoint with TF 1.14.
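One quick way to check is the diagnostic sketch below; the op lookup relies on TF 1.x internals, so treat it as an assumption rather than a stable API:

```python
import tensorflow as tf
from tensorflow.python.framework import op_def_registry

print(tf.__version__)
# The saved model needs the NonMaxSuppressionV3 op; if it is not
# registered in this TF build, loading fails with the KeyError above.
registered = op_def_registry.get_registered_ops()
print('NonMaxSuppressionV3' in registered)
```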

jiyun-xiao commented 4 years ago

Yes, that seems to be the cause of the bug: my version of Tensorflow is TF 1.6, and it seems that TF 1.6 only supports NonMaxSuppressionV2. Thank you so much for your help!