serser opened this issue 6 years ago
I think the patches are already extracted! You should read the paper or the Matlab code "sift.m". You only need to describe the patch. This dataset extracts the patches in its own way and aims to compare the performance of different descriptors.
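For reference, reading the released patches is straightforward (a minimal sketch, assuming the hpatches-release layout where every *.png in a sequence folder stacks its patches vertically as 65x65 grayscale blocks; the path is just an example):

```python
import cv2

def load_patches(png_path):
    # each PNG is 65 px wide and (number of patches) * 65 px tall
    strip = cv2.imread(png_path, cv2.IMREAD_GRAYSCALE)
    assert strip.shape[1] == 65 and strip.shape[0] % 65 == 0
    return [strip[i * 65:(i + 1) * 65] for i in range(strip.shape[0] // 65)]

patches = load_patches('hpatches-release/i_ajuntament/ref.png')  # example path
```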
Thanks @appleleaves. Neither hpatches-release nor hpatches-sequences-release provides ground-truth interest points (corner points); instead they provide homography matrices and jitter/overlap information. The Matlab code sift.m reads in the provided patches and computes descriptors directly on them. Since I am testing a non-patch-based method, this is not what I am looking for. The paper says in Section 4, Images and Patches, under the Patches paragraph: the patches are found by DoG, Hessian-Hessian and Harris-Laplace, and detections with IoU > 0.5 are treated as duplicates and discarded.
I'll give it a try. Thanks again.
@appleleaves is right, the HPatches benchmark is intended only for descriptors. If you want to evaluate detectors, you can use the full images of each sequence, which are available with their respective homographies:
https://github.com/hpatches/hpatches-dataset#full-image-sequences
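For example, a detector can be checked directly against the ground-truth homographies (a minimal sketch; AKAZE and the 3 px threshold are just example choices, and the file names follow the hpatches-sequences-release layout):

```python
import cv2
import numpy as np

img1 = cv2.imread('i_ajuntament/1.ppm', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('i_ajuntament/2.ppm', cv2.IMREAD_GRAYSCALE)
H = np.loadtxt('i_ajuntament/H_1_2')  # 3x3 homography from image 1 to image 2

det = cv2.AKAZE_create()
kp1 = det.detect(img1, None)
kp2 = det.detect(img2, None)

# project the image-1 keypoints into image 2 with the ground-truth homography
pts1 = np.float32([k.pt for k in kp1]).reshape(-1, 1, 2)
proj = cv2.perspectiveTransform(pts1, H).reshape(-1, 2)
pts2 = np.float32([k.pt for k in kp2])

# a keypoint counts as repeated if some image-2 detection lies within 3 px
dists = np.linalg.norm(proj[:, None, :] - pts2[None, :, :], axis=2)
print('repeatability:', (dists.min(axis=1) < 3.0).mean())
```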
Thanks, Vassileios! I've tried to apply your proposed method to grab keypoints on full images. I am testing SuperPoint, a new deep descriptor that is not patch-based; its authors used the HPatches dataset, but on full images with Mikolajczyk's evaluation method. That makes it hard to compare against the other deep methods reported in your paper, which is a comparison that really needs to be done.
Yes, for SuperPoint it needs to be the full image, so it's not easy to compare with patch-based methods.
I am guessing there is no scale information in SuperPoint for each detected keypoint, so there is no way to convert it to a patch, right?
That is true. The SuperPoint architecture takes two images in and outputs keypoint heatmaps (probabilities) and descriptors, also at full-image resolution, with no scale information. I am looking into this problem.
@serser you may try feeding the patch in and taking the "central" descriptor from the output tensor. Although, I am not sure how good it will be.
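Something along these lines (a minimal sketch; `net` is a placeholder for a SuperPoint-like model returning a dense descriptor map of shape (1, C, H', W'), not a real API):

```python
import torch
import torch.nn.functional as F

def central_descriptor(net, patch):
    # patch: 65x65 grayscale numpy array scaled to [0, 1]
    x = torch.from_numpy(patch).float()[None, None]  # shape (1, 1, 65, 65)
    desc_map = net(x)                                # shape (1, C, H', W')
    ch, cw = desc_map.shape[2] // 2, desc_map.shape[3] // 2
    d = desc_map[0, :, ch, cw]                       # descriptor at the centre
    return F.normalize(d, dim=0).detach().numpy()
```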
Hi,
I am testing other descriptors on HPatches. But as you discussed, the patches are already extracted using DoG (among other detectors) and normalized to 65x65 pixels.
In this case, how can I generate my descriptors from such a patch? I referred to hpatches_extract.py in HPatches-Descriptors (https://github.com/hpatches/hpatches-descriptors/blob/master/python/hpatches_extract.py), in particular:

```python
mi = np.mean(patch)                # mean intensity of the 65x65 patch
sigma = np.std(patch)              # standard deviation of the intensities
descr[i] = np.array([mi, sigma])   # toy 2-D descriptor for patch i
```
This part is supposed to be replaced by my own descriptor, right? But how is it possible to generate exactly one descriptor from a patch? Besides, as far as I know, a keypoint is required to extract a descriptor, so here patch = keypoint?
Thanks in advance. I may be asking some stupid questions, so any help would be appreciated!
Regards, Weibo.
patch = keypoint
@qiuweibo exactly. You can refer to my PR, where I explicitly tell OpenCV SIFT that the keypoint is the center of the patch: https://github.com/hpatches/hpatches-benchmark/pull/34
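The idea looks roughly like this (a minimal sketch; the keypoint size below is an assumed value, see the PR for the exact settings):

```python
import cv2

# use cv2.xfeatures2d.SIFT_create() on older OpenCV builds
sift = cv2.SIFT_create()

def sift_on_patch(patch):
    # patch: 65x65 uint8 grayscale array
    kp = [cv2.KeyPoint(32.0, 32.0, 13.0)]  # x, y at the patch centre; size assumed
    _, desc = sift.compute(patch, kp)
    return desc[0]  # one 128-D SIFT descriptor per patch
```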
Thanks Dmytro, just merged.
Thanks so much for your quick reply and the nice PR. I will check it out.
I want to test some deep learning descriptors as well (which have their own keypoint and descriptor extraction), but I am not sure whether HPatches would be applicable. I will keep in touch about this.
Regards, Weibo.
@qiuweibo for cases when the deep nets are SuperPoint-style, meaning that the detector and descriptor are too tightly coupled to use my trick, just try HSequences, as in the D2-Net or Key.Net papers.
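For instance, one HSequences pair can be scored in the D2-Net style (a minimal sketch; mutual nearest-neighbour matching with an assumed 3 px reprojection threshold, float32 descriptors assumed):

```python
import cv2
import numpy as np

def mma(kp1, desc1, kp2, desc2, H, thr=3.0):
    # mutual nearest-neighbour matching on the descriptors
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(desc1, desc2)
    if not matches:
        return 0.0
    p1 = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    p2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # project matched image-1 keypoints into image 2 via the ground-truth homography
    proj = cv2.perspectiveTransform(p1, H).reshape(-1, 2)
    err = np.linalg.norm(proj - p2, axis=1)
    return float((err < thr).mean())  # fraction of correct matches
```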
Hi,
I was thinking of using other image sequences (such as the KITTI dataset) to generate HPatches, since my application is mostly on traffic. Is it possible to offer open-source code for generating HPatches from different image sequences?
Thanks in advance.
Regards, Weibo.
Yeah, I am struggling to find a suitable dataset and evaluation pipeline for my project, which is about evaluating detectors and descriptors on traffic images. The detectors and descriptors could be SIFT, AKAZE, and some deep learning ones.
@qiuweibo if you also use (and evaluate) a detector, patches will be useless. Just use the homographies/depth and the images, as done in HSequences. KITTI is very bad for this, IMHO, because it has a very specific geometric relationship between images, and end-to-end or dense methods like optical flow (LK, FlowNet, etc.) will be much more appropriate. Regarding traffic, I recommend you take a look at this paper: http://openaccess.thecvf.com/content_eccv_2018_workshops/w30/html/Komorowski_Interest_point_detectors_stability_evaluation_on_ApolloScape_dataset_ECCVW_2018_paper.html
Thanks for your swift reply! I just checked out the paper; it is very recent and innovative, and it evaluates many of the descriptors I am interested in. But unfortunately the stereo dataset of ApolloScape hasn't been released yet. Maybe I'll evaluate on the HPatches full image sequences with their respective homographies:
https://github.com/hpatches/hpatches-dataset#full-image-sequences
Hi folks,
I downloaded the HPatches package, as well as the SIFT and ORB descriptors (e.g. `sh download.sh descr sift`). It turned out that for each patch per sequence (image) there exists exactly one descriptor (e.g. for i_ajuntament there are 853 patches and 853 SIFT descriptors, and the same for ORB). How can that be? What if my new keypoint detector finds no interest point on a specific patch? I ran the patches through the default OpenCV detectors, roughly as sketched below; some detected keypoints (sometimes more than one) and others did not. How could we run an evaluation in that case?
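The check was along these lines (a minimal sketch; the detector choices are just examples and the patch is a placeholder):

```python
import cv2
import numpy as np

patch = np.zeros((65, 65), dtype=np.uint8)  # placeholder; load a real patch here

for make in (cv2.ORB_create, cv2.AKAZE_create, cv2.BRISK_create):
    det = make()
    kps = det.detect(patch, None)
    print(make.__name__, len(kps))  # often 0, sometimes more than 1, per patch
```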
Any clarification is appreciated.