Closed: JSP-ywu closed this issue 2 years ago
Hi, the information stored in the .txt files (keypoints.txt, descriptors.txt, global_features.txt) should match the content of the numpy arrays that are dumped into .kpt, .desc, and .gfeat.
If you only have x, y values for your keypoints, that's totally fine; as we mention in the documentation, the other values are optional (and never used right now).
The .kpt, .desc, and .gfeat files are binary dumps of the numpy arrays. When we write them like this, we lose the information about shape and type, so that is stored inside the .txt file.
If your image keypoints are a np.array of shape (5000, 2) with dtype=np.float32, then you write dtype float32 and dsize 2, and the number of keypoints will be inferred.
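A minimal numpy sketch of that round trip (the file name is a placeholder, and the exact header of keypoints.txt should follow the kapture documentation; only the binary dump part is shown here):

```python
import numpy as np

# 5000 keypoints with only x, y coordinates -> shape (5000, 2), dtype float32
keypoints = np.random.rand(5000, 2).astype(np.float32)

# .kpt is a raw binary dump: shape and dtype are NOT stored in the file itself
keypoints.tofile("some_image.jpg.kpt")

# keypoints.txt therefore declares dtype=float32 and dsize=2; the number of
# keypoints is inferred from the file size when reading the dump back:
loaded = np.fromfile("some_image.jpg.kpt", dtype=np.float32).reshape(-1, 2)
assert loaded.shape == (5000, 2)
```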
Thanks! I'll check it.
Is the number of keypoints per image important? I mean, if the number of keypoints is different for each image, is there a problem? Also, there are two localization scripts for colmap, kapture_colmap_localize.py and kapture_colmap_localize_sift.py. In my case, should I use the latter? The default in the pipeline is the former. I'm sorry to ask again..! @yocabon
Hi, no, a different number of keypoints per image is not a problem.
kapture_colmap_localize.py is used with pre-extracted keypoints (matches must be pre-computed). kapture_colmap_localize_sift.py uses colmap to extract SIFT features and matches them with the colmap vocab tree matcher (the default colmap pipeline).
With pre-extracted features, even if they are SIFT, you should use kapture_colmap_localize.py (well, in fact you should use https://github.com/naver/kapture-localization/blob/main/pipeline/kapture_pipeline_mapping.py followed by https://github.com/naver/kapture-localization/blob/main/pipeline/kapture_pipeline_localize.py; see how it's done in the tutorials).
In the paper you linked, we probably used https://github.com/naver/kapture-localization/blob/main/pipeline/kapture_pipeline_colmap_vocab_tree.py for SIFT [32] + vocab. tree (COLMAP [53]), and for global_features + SIFT we used it to extract colmap SIFT keypoints (skipping some steps), imported them into kapture, and ran the other pipeline.
Hi! I'm a student from South Korea! Firstly, thanks for your amazing work. I'm a beginner in localization :).. I want to obtain global and local features using NetVLAD + SIFT on the dataset from here. I already read that kapture-localization only supports AP-GeM and R2D2 directly. For the global feature, I can obtain it from the official NetVLAD code, but I don't know how to convert it appropriately to the kapture format. I know kapture has a format specification, but what is the right feature format for NetVLAD? I also used the SIFT method from OpenCV, and I have the same problem there as with the global feature: how do I match the images with local features extracted from SIFT using kapture-localization? Also, I extracted the keypoints and descriptors, but the number of keypoints is smaller than the number of keypoints from R2D2. I want to reproduce the score from here.
I apologize that the questions are not clear.
I realized that I should write down the information in a text file such as global_features.txt. Is there any problem if the dsize or type of the feature is changed? For example, the dsize of the NetVLAD feature is not the same as AP-GeM's, and likewise for the local features (e.g. the number of keypoints). What information should I put in the keypoint array obtained from SIFT? According to the example here, the keypoint array contains [x y scale orientation], but I can't find the scale parameter on the keypoints obtained from SIFT in OpenCV.
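A minimal sketch of how such a [x, y, scale, orientation] array could be built from OpenCV SIFT keypoints, assuming OpenCV >= 4.4 (cv2.SIFT_create); kp.size (the keypoint diameter) is used as the scale value here and kp.angle as the orientation, and the file names are placeholders. Per the answer above, only x and y are actually used, so an (N, 2) array with dsize 2 would also work:

```python
import cv2
import numpy as np

img = cv2.imread("some_image.jpg", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()                      # OpenCV >= 4.4
kps, des = sift.detectAndCompute(img, None)

# Build an (N, 4) float32 array: [x, y, scale, orientation].
# kp.size is OpenCV's keypoint diameter (used as "scale" here) and
# kp.angle is the orientation in degrees.
keypoints = np.array(
    [[kp.pt[0], kp.pt[1], kp.size, kp.angle] for kp in kps],
    dtype=np.float32,
)
descriptors = des.astype(np.float32)          # SIFT descriptors, shape (N, 128)

# Raw binary dumps; dtype float32 with dsize 4 (keypoints) and 128 (descriptors)
# then go into keypoints.txt / descriptors.txt as described above.
keypoints.tofile("some_image.jpg.kpt")
descriptors.tofile("some_image.jpg.desc")
```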
I'm not sure whether this additional explanation makes my questions clearer.