rpautrat / SuperPoint

Efficient neural feature detector and descriptor
MIT License

repeatability on HPatches Dataset #24

Closed mfaisal59 closed 5 years ago

mfaisal59 commented 5 years ago

Hi, I computed the repeatability score of the pre-trained SuperPoint network using the repeatability code provided at https://github.com/rpautrat/SuperPoint/blob/master/superpoint/evaluations/detector_evaluation.py, and I am getting the following results:

[Screenshot: hpatches]

These results do not match the ones reported in the original paper. Can you help me understand the discrepancy?

rpautrat commented 5 years ago

Yes, I think it's because I modified the code of MagicPoint since the release of the pretrained model, so the pretrained model no longer matches the code.

But I will soon release an updated model for MagicPoint, as well as a pretrained model for SuperPoint.

mfaisal59 commented 5 years ago

Thanks for the quick reply. Just for clarification: I am talking about the pre-trained SuperPoint model provided by the original authors at https://github.com/MagicLeapResearch/SuperPointPretrainedNetwork. I computed the repeatability of this model using your repeatability function, and I want to verify whether you get the same repeatability results on HPatches for this pre-trained network.

rpautrat commented 5 years ago

Ah sorry my mistake!

So for the pretrained model of the original authors, I get different results than yours (mine are in my README: https://github.com/rpautrat/SuperPoint): 0.641 for illumination and 0.379 for viewpoint changes.

But I see that in your output of compute_repeatability, the number of detections is not 300, but around 250 for illumination and even fewer for viewpoint. Since repeatability depends strongly on the number of detections, this might explain why you get a lower score. You should always get roughly 300 detected points to be able to compare repeatability fairly.

How exactly did you use their pretrained model? Did you use their default parameters?
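(For reference, "getting roughly 300 detections" just means ranking the detector's responses by confidence and keeping the k best. Below is a minimal sketch of such a helper, assuming a dense HxW probability map like the one exported by this repo; the actual select_top_k used in the evaluation notebooks may differ in its details.)

    import numpy as np

    def select_top_k(prob_map, thresh=0.0, k=300):
        # Zero out responses below the confidence threshold
        prob = np.where(prob_map >= thresh, prob_map, 0.0)
        flat = prob.ravel()
        # Indices of the k highest-scoring pixels in the flattened map
        top = np.argpartition(flat, -k)[-k:]
        top = top[flat[top] > 0]  # drop zeros if fewer than k points pass the threshold
        return np.unravel_index(top, prob_map.shape)  # (rows, cols), like np.where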

mfaisal59 commented 5 years ago

Okay, I got your point. Yes, I used the default parameters to extract the points. You are right that repeatability depends on the number of detections. I will compute it again and update you. Thanks.

rpautrat commented 5 years ago

Sure, let me know whether you manage to get the right number of detections and better scores.

mfaisal59 commented 5 years ago

After using 300 points, I am getting these results.

[Screenshot: hpaches_pretrained]

rpautrat commented 5 years ago

Ok, so now your results are closer to those I got with the pretrained model of MagicLeap (except for illumination, where mine was slightly better at 0.641, probably because we didn't use exactly the same parameters).

So the results are quite close to the original paper for illumination (0.652), but not for viewpoint (0.503). This is probably because I don't compute the repeatability in exactly the same way as the authors do (even though I tried to stay as close to the paper as possible). Since they didn't release their evaluation code, we unfortunately can't compare directly. But my metric should (hopefully) be rather close to theirs.
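Concretely, the metric is roughly the following: keep the top-k detections in both images, warp the keypoints of one image into the other with the ground-truth homography, and count a keypoint as repeated if a detection lies within distance_thresh pixels. Here is a simplified sketch (the real compute_repeatability in evaluations/detector_evaluation.py handles image boundaries and other details):

    import numpy as np

    def repeatability(kp1, kp2, H, distance_thresh=3):
        # kp1, kp2: (N, 2) and (M, 2) arrays of (x, y) detections in images 1 and 2
        # H: 3x3 homography mapping image 1 coordinates into image 2
        pts = np.concatenate([kp1, np.ones((len(kp1), 1))], axis=1)  # homogeneous coords
        warped = (H @ pts.T).T
        warped = warped[:, :2] / warped[:, 2:]  # back to (x, y)

        # Pairwise distances between warped image-1 points and image-2 detections
        dist = np.linalg.norm(warped[:, None, :] - kp2[None, :, :], axis=2)

        # A keypoint counts as repeated if some detection in the other image is close enough
        count1 = np.sum(dist.min(axis=1) <= distance_thresh)
        count2 = np.sum(dist.min(axis=0) <= distance_thresh)
        return (count1 + count2) / (len(kp1) + len(kp2))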

dada025 commented 5 years ago

@rpautrat Hi, I got a problem when I tried to evaluate the repeatability on HPatches:

    wu@dada:~/SuperPoint/superpoint$ python3.6 export_detections_repeatability.py configs/magic-point_repeatability.yaml magic-point_coco --export_name=magic-point_hpatches-repeatability-v
    [12/08/2018 20:11:10 INFO] Number of GPUs detected: 1
    Traceback (most recent call last):
      File "export_detections_repeatability.py", line 34, in <module>
        with experiment._init_graph(config, with_dataset=True) as (net, dataset):
      File "/usr/local/lib/python3.6/contextlib.py", line 82, in __enter__
        return next(self.gen)
      File "/home/wu/SuperPoint/superpoint/experiment.py", line 70, in _init_graph
        dataset = get_dataset(config['data']['name'])(config['data'])
      File "/home/wu/SuperPoint/superpoint/datasets/base_dataset.py", line 102, in __init__
        self.dataset = self._init_dataset(self.config)
      File "/home/wu/SuperPoint/superpoint/datasets/patches_dataset.py", line 40, in _init_dataset
        homographies.append(np.loadtxt(str(Path(path, "H_1_" + str(i)))))
      File "/usr/local/lib/python3.6/site-packages/numpy/lib/npyio.py", line 926, in loadtxt
        fh = np.lib._datasource.open(fname, 'rt', encoding=encoding)
      File "/usr/local/lib/python3.6/site-packages/numpy/lib/_datasource.py", line 262, in open
        return ds.open(path, mode, encoding=encoding, newline=newline)
      File "/usr/local/lib/python3.6/site-packages/numpy/lib/_datasource.py", line 618, in open
        raise IOError("%s not found." % path)
    OSError: /home/wu/SuperPoint/superpoint/HPatches/hpatches-sequences-release/H_1_2 not found.

I think it may come from step 1, 2 or 3, so I ran visualize_synthetic-shapes to check step 1. But the code couldn't find the path of the superpoint package, so I tried a few things to solve this problem. I wrote an __init__.py, but it failed:

    from superpoint import *

    def get_superpoint(name):
        mod = __import__('SuperPoint.superpoint.{}'.format(name), fromlist=[''])
        return getattr(mod, _module_to_class(name))

    def _module_to_class(name):
        return ''.join(n.capitalize() for n in name.split('_'))

Could you give me some advice? Thanks!

rpautrat commented 5 years ago

Hi, the error says that the file /home/wu/SuperPoint/superpoint/HPatches/hpatches-sequences-release/H_1_2 was not found. Are you sure about the path where you stored HPatches?

dada025 commented 5 years ago

@rpautrat Hi, I solved the previous problem. But restoring the parameters always stops at 58%. I tried running it again, but it didn't work. Could you give me some advice? Thanks a lot!

    wu@dada:~/SuperPoint-master/superpoint$ python3.6 export_detections_repeatability.py configs/magic-point_repeatability.yaml magic-point_coco --export_name=magic-point_hpatches-repeatability-v
    [12/14/2018 10:38:50 INFO] Number of GPUs detected: 1
    2018-12-14 10:38:52.316119: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    2018-12-14 10:38:52.428307: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:898] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2018-12-14 10:38:52.429076: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1212] Found device 0 with properties: name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7335 pciBusID: 0000:01:00.0 totalMemory: 7.92GiB freeMemory: 7.52GiB
    2018-12-14 10:38:52.429092: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1312] Adding visible gpu devices: 0
    2018-12-14 10:38:52.604002: I tensorflow/core/common_runtime/gpu/gpu_device.cc:993] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7262 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
    [12/14/2018 10:38:52 INFO] Scale of 0 disables regularizer. (repeated 10 times)
    2018-12-14 10:38:53.015200: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1312] Adding visible gpu devices: 0
    2018-12-14 10:38:53.015356: I tensorflow/core/common_runtime/gpu/gpu_device.cc:993] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 155 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
    [12/14/2018 10:38:53 INFO] Restoring parameters from /home/wu/SuperPoint-master/superpoint/EXPER_DIR/magic-point_coco/model.ckpt-200000
     58%|███████████████████████▏ | 580/1000 [03:50<01:58, 3.56it/s]

rpautrat commented 5 years ago

It stops at 580/1000 because there are only 580 image pairs in the HPatches dataset, so everything is fine :)

The limit is set to 1000 in case you want to evaluate on another dataset; in that case, only the first 1000 images would be used.

shylockyuan commented 5 years ago

Hello, I tried to evaluate the detector repeatability on HPatches. Strangely, the code can find 'utils' but can't find 'superpoint', even though superpoint and utils are in the same place (/home/wu/SuperPoint-master).

I tried changing the location of superpoint (e.g. to /home/wu/SuperPoint-master/notebooks), but that causes new problems.

Could you give me some advice? Thanks a lot!

    import cv2
    import numpy as np
    import matplotlib.pyplot as plt
    from pathlib import Path

    from superpoint.settings import EXPER_PATH
    import superpoint.evaluations.detector_evaluation as ev
    from utils import plot_imgs

    %matplotlib inline
    %load_ext autoreload
    %autoreload 2

    ModuleNotFoundError                       Traceback (most recent call last)
    <ipython-input> in <module>
          4 from pathlib import Path
          5
    ----> 6 from superpoint.settings import EXPER_PATH
          7 import superpoint.evaluations.detector_evaluation as ev
          8 from utils import plot_imgs
    ModuleNotFoundError: No module named 'superpoint'

Images visualization:

    for i in range(1):
        for e, thresh in zip(experiments, confidence_thresholds):
            path = Path(EXPER_PATH, "outputs", e, str(i) + ".npz")
            d = np.load(path)
            points1 = select_top_k(d['prob'], thresh=thresh)
            im1 = draw_keypoints(d['image'][..., 0], points1, (0, 255, 0)) / 255.
            points2 = select_top_k(d['warped_prob'], thresh=thresh)
            im2 = draw_keypoints(d['warped_image'], points2, (0, 255, 0)) / 255.
            plot_imgs([im1, im2], ylabel=e, dpi=200, cmap='gray',
                      titles=[str(len(points1[0])) + ' points', str(len(points2[0])) + ' points'])

which fails with:

    NameError: name 'EXPER_PATH' is not defined

Repeatability:

    for exp in experiments:
        repeatability = ev.compute_repeatability(exp, keep_k_points=300, distance_thresh=3, verbose=False)
        print('> {}: {}'.format(exp, repeatability))

which fails with:

    NameError: name 'ev' is not defined

rpautrat commented 5 years ago

Hi! Have you installed the superpoint package correctly with make install? From where are you launching your script? If you launch it from superpoint/notebooks, it makes sense that utils.py is found (it is in the same folder) even without the superpoint package being properly installed.

After installing the superpoint module with make install, it should be found wherever you launch the script from. Also make sure that you use the same Python version when building the module and when running it.
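A quick diagnostic (just an illustrative snippet, not part of the repo) is to compare the interpreter and module paths from the environment where the failure occurs:

    import sys
    print(sys.executable)        # should be the same Python you used for `make install`

    import superpoint            # raises ModuleNotFoundError if the package is not installed
    print(superpoint.__file__)   # shows which copy of the package is actually imported

If the two paths point to different environments (e.g. system Python vs. Anaconda), that mismatch is usually the culprit.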

shylockyuan commented 5 years ago

Hi! I installed the superpoint package with sudo make install in /home/wu/SuperPoint-master, because plain make install didn't work, with this error:

    Obtaining file:///home/wu/SuperPoint-master
        Complete output from command python setup.py egg_info:
        running egg_info
        writing superpoint.egg-info/PKG-INFO
        error: [Errno 13] Permission denied: 'superpoint.egg-info/PKG-INFO'
        ----------------------------------------
    Command "python setup.py egg_info" failed with error code 1 in /home/wu/SuperPoint-master/
    makefile:2: recipe for target 'install' failed
    make: *** [install] Error 1

I tried launching it from SuperPoint-master and from SuperPoint-master/notebooks. Both ways cause the same error: can't find 'superpoint'.

I first ran the code with Python 3.6.1, and it could find neither 'superpoint' nor 'tensorflow'. To solve the 'No module named tensorflow' problem in Jupyter, I used Anaconda.

I now run the code with Python 3.6.7 in a virtual environment created by Anaconda, and the Python versions in the terminal and in Jupyter are both 3.6.7. But the 'can't find superpoint' problem still exists.

rpautrat commented 5 years ago

Ok, so the problem is that you compiled with your local Python 3.6.1 and then ran the code with the Python 3.6.7 of Anaconda.

First remove any files that could have been created when you first ran sudo make install (for example the folder superpoint.egg-info in your SuperPoint folder). Then activate your Anaconda environment with Python 3.6.7 and run make install again in the SuperPoint folder (no sudo should be necessary at that point, since you will be using the virtual environment). Then try to run the code again with Anaconda; it should work.

shylockyuan commented 5 years ago

Thank you for your help. The previous problem has been solved.

And then I ran the notebook detector_repeatability_hpatches:

    magic-point_hpatches-repeatability-v : 0.371
    fast_hpatches-repeatability-v        : 0.407
    harris_hpatches-repeatability-v      : 0.474
    shi_hpatches-repeatability-v         : 0.407
    magic-point_hpatches-repeatability-i : 0.653
    fast_hpatches-repeatability-i        : 0.576
    harris_hpatches-repeatability-i      : 0.630
    shi_hpatches-repeatability-i         : 0.583

Q1: How can I evaluate magicLeap-repeatability-v (the repeatability of the MagicLeap pretrained model under viewpoint changes)?

Q2: I want to evaluate the repeatability of SuperPoint. Can I train the network with python experiment.py train configs/superpoint_coco.yaml superpoint_coco?

    ~/SuperPoint-master/superpoint$ python experiment.py train configs/superpoint_coco.yaml superpoint_coco
    [12/18/2018 22:22:29 INFO] Running command TRAIN
    [12/18/2018 22:22:29 INFO] Number of GPUs detected: 1
    Traceback (most recent call last):
      File "experiment.py", line 149, in <module>
        args.func(config, output_dir, args)
      File "experiment.py", line 86, in _cli_train
        train(config, config['train_iter'], output_dir)
      File "experiment.py", line 21, in train
        with _init_graph(config) as net:
      File "/home/wu/anaconda3/envs/tensorflow-gpu/lib/python3.6/contextlib.py", line 81, in __enter__
        return next(self.gen)
      File "experiment.py", line 68, in _init_graph
        dataset = get_dataset(config['data']['name'])(config['data'])
      File "/home/wu/SuperPoint-master/superpoint/datasets/base_dataset.py", line 102, in __init__
        self.dataset = self._init_dataset(self.config)
      File "/home/wu/SuperPoint-master/superpoint/datasets/coco.py", line 53, in _init_dataset
        assert p.exists(), 'Image {} has no corresponding label {}'.format(n, p)
    AssertionError: Image COCO_train2014_000000041739 has no corresponding label /home/wu/SuperPoint-master/superpoint/EXPER_DIR/outputs/mp_synth-v7_ha1-100_trained_no-topk_coco/COCO_train2014_000000041739.npz

rpautrat commented 5 years ago

I'm glad to hear that your previous issue has been solved!

Q1: My initial plan for comparing with the pretrained model of MagicLeap was to convert their Torch weights into TensorFlow ones and then export a SuperPoint model from my framework with their weights. But it turns out that I don't get very satisfactory results that way, and I am not sure why, probably because of a slight difference somewhere between their implementation and mine.

So what I did in the end is a quick and dirty fix (which is why I didn't put it online): I added the following lines in the file models/classical_detectors_descriptors.py (around line 40):

    elif config['method'] == 'pretrained_magic_point':
        # SuperPointFrontend comes from the demo script of
        # https://github.com/MagicLeapResearch/SuperPointPretrainedNetwork
        # Adapt this path to your own copy of the pretrained Torch weights
        # (https://github.com/MagicLeapResearch/SuperPointPretrainedNetwork/blob/master/superpoint_v1.pth):
        weights_path = '/cluster/home/pautratr/3d_project/SuperPointPretrainedNetwork/superpoint_v1.pth'
        fe = SuperPointFrontend(weights_path=weights_path,
                                nms_dist=config['nms'],
                                conf_thresh=0.015,
                                nn_thresh=0.7,
                                cuda=False)
        # run() returns the keypoints, their descriptors and the detection heatmap
        points, desc, detections = fe.run(im[:, :, 0] / 255.)
        points = points.astype(int)
        # Scatter the descriptors into a dense (H, W, 256) map at the keypoint locations
        descriptors = np.zeros((im.shape[0], im.shape[1], 256), np.float)
        descriptors[points[1, :], points[0, :]] = np.transpose(desc)

    detections = detections.astype(np.float32)
    descriptors = descriptors.astype(np.float32)
    return (detections, descriptors)

This way you can use the pretrained MagicLeap SuperPoint like any other classical baseline and run the evaluation on it.

Q2: The command you used is the right one. Are you sure that you have all the labels of the COCO training images in the folder /home/wu/SuperPoint-master/superpoint/EXPER_DIR/outputs/mp_synth-v7_ha1-100_trained_no-topk_coco? You can, for example, count the number of files in the two folders /home/wu/SuperPoint-master/superpoint/EXPER_DIR/outputs/mp_synth-v7_ha1-100_trained_no-topk_coco and /home/wu/SuperPoint-master/superpoint/DATA_DIR/COCO/train2014/ with the command ls <path to the folder> | wc -l (the result should be 79771 in both cases).

shylockyuan commented 5 years ago

Sorry, I may not have gotten your point. The problem with /home/wu/SuperPoint-master/superpoint/EXPER_DIR/outputs/mp_synth-v7_ha1-100_trained_no-topk_coco still exists. I tried copying /home/wu/SuperPoint-master/superpoint/DATA_DIR/COCO/train2014/ to /home/wu/SuperPoint-master/superpoint/EXPER_DIR/outputs/mp_synth-v7_ha1-100_trained_no-topk_coco, but it didn't work. Then I tried to produce mp_synth-v7_ha1-100_trained_no-topk_coco myself:

    ~/SuperPoint-master/superpoint$ python generate_coco_patches.py train configs/coco_patches_generation.yaml coco-patches
    Traceback (most recent call last):
      File "generate_coco_patches.py", line 9, in <module>
        from superpoint.models.utils import (sample_homography, flat2mat,
    ImportError: cannot import name 'sample_homography'

So I think the problem is that I don't know how to produce mp_synth-v7_ha1-100_trained_no-topk_coco. Could you give me some advice? Thanks a lot.

rpautrat commented 5 years ago

The file generate_coco_patches.py is actually used when you want to generate patches similar to HPatches. This can then be used as a validation set, while HPatches is used as a test set.

But you don't need it to train SuperPoint. You first need to train MagicPoint and export its detections on COCO (with export_detections.py), which will create the folder /home/wu/SuperPoint-master/superpoint/EXPER_DIR/outputs/mp_synth-v7_ha1-100_trained_no-topk_coco. Then you can train SuperPoint using the labels you just created. Everything is explained here: https://github.com/rpautrat/SuperPoint/tree/superpoint_v1

shaofengzeng commented 3 years ago

> After using 300 points, I am getting these results.
>
> [Screenshot: hpaches_pretrained]

Excuse me, can you tell me how to increase the number of keypoints?

rpautrat commented 3 years ago

Hi,

In the config file used to export detections for the repeatability computation (e.g. https://github.com/rpautrat/SuperPoint/blob/master/superpoint/configs/magic-point_repeatability.yaml), you can lower detection_threshold to detect more keypoints. You can also increase top_k to a higher value (e.g. 2000), but detection_threshold should be the main parameter for increasing the number of keypoints.

Then, of course, if you want to compute the repeatability with more than 300 points, you can also increase the keep_k_points parameter of the compute_repeatability function in the notebook.
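For example (illustrative experiment name and values, using the same call signature as in the repeatability notebook):

    import superpoint.evaluations.detector_evaluation as ev

    # keep_k_points caps how many of the exported detections enter the metric
    rep = ev.compute_repeatability('magic-point_hpatches-repeatability-v',
                                   keep_k_points=600, distance_thresh=3,
                                   verbose=False)
    print(rep)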

1962975362 commented 3 years ago

> OSError: /home/wu/SuperPoint/superpoint/HPatches/hpatches-sequences-release/H_1_2 not found.

How can I solve this?

rpautrat commented 3 years ago

Answered in https://github.com/rpautrat/SuperPoint/issues/221#issuecomment-878945803