-
Dear Sir,
You have done a great job. The current GitHub repository includes ground truths for Nordland, Pittsburgh 30k, and Tokyo 24/7. It would be helpful for us if you could provide the datasets and the groun…
-
https://github.com/Nanne/pytorch-NetVlad/blob/8f7c37ba7a79a499dd0430ce3d3d5df40ea80581/main.py#L89
```python
with h5py.File(train_set.cache, mode='w') as h5:
    pool_size = encoder_dim
    …
```
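The linked snippet caches descriptors to disk with h5py before clustering. As a minimal sketch of that caching pattern, here is a hypothetical `build_descriptor_cache` helper (the function name is an assumption, and random features stand in for the encoder output):

```python
import h5py
import numpy as np

def build_descriptor_cache(path, num_images, pool_size, batch=4):
    """Hypothetical sketch of the h5py caching pattern: allocate one
    float32 matrix up front, then fill it batch by batch."""
    with h5py.File(path, mode='w') as h5:
        feats = h5.create_dataset('features', shape=(num_images, pool_size),
                                  dtype=np.float32)
        for start in range(0, num_images, batch):
            end = min(start + batch, num_images)
            # stand-in for encoder output; the real code writes pooled descriptors
            feats[start:end] = np.random.rand(end - start, pool_size)

build_descriptor_cache('cache_test.h5', num_images=10, pool_size=16)
```

Writing into a pre-allocated dataset this way keeps memory flat regardless of dataset size, which is the point of the cache in `main.py`.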
-
Hello, when I run the 'python main_scripts/evaluation.py --pca_outdim 4096 --resume model_path --img_shape 384 384 --trunc_te 8 --freeze_te 1 --arch cct384 --aggregation seqvlad --dataset_path /path/…
-
First of all, this work is fantastic.
Secondly, the paper does not mention any experiments using FiLM-Ensemble with other input modalities (point clouds, for example).
I'm currently working on unc…
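For context, FiLM itself is just a per-channel affine transform whose parameters are predicted from a conditioning input, so nothing in the operation is tied to images. A minimal numpy sketch, with shapes chosen hypothetically for a point-cloud-style feature tensor (none of this is from the paper):

```python
import numpy as np

def film(features, gamma, beta):
    """Feature-wise linear modulation: per-channel affine transform.

    features: (batch, channels, n_points)  e.g. point-cloud features
    gamma, beta: (batch, channels)          predicted by a conditioning network
    """
    return gamma[:, :, None] * features + beta[:, :, None]
```

In a FiLM-Ensemble, each ensemble member would contribute its own (gamma, beta) pair while the backbone weights are shared, which is why swapping the input modality looks feasible in principle.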
-
Hi, I used kp2d for keypoints and local descriptors and NetVLAD as the global descriptor in hloc on the Aachen dataset. The results are very bad. Has anyone done visual localization using this method?
-
You mentioned that pytorch-NetVlad requires PyTorch version 0.4.0 at minimum. Which is the highest PyTorch version supported by this program?
-
Thanks for your work.
For Table II in the paper, Intermediate Fusion: how is the fusion achieved using SeqNet? I guess that first the backbone (vgg-16/resnet-18/resnet-50/cct224/cct384) is used to get global …
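For reference, SeqNet builds a sequence-level descriptor by running a temporal convolution over per-frame descriptors and then pooling over time. A rough numpy sketch with untrained random weights standing in for the learned filters (the helper name and shapes are assumptions, not the authors' code):

```python
import numpy as np

def seq_descriptor(frame_descs, w=5, seed=0):
    """Sketch of a SeqNet-style temporal conv over per-frame descriptors.

    frame_descs: (L, D) global descriptors from a backbone, one per frame.
    Returns a single L2-normalized D-dim sequence descriptor.
    """
    L, D = frame_descs.shape
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((w, D, D)) * 0.01  # untrained stand-in weights
    # temporal convolution: mix each window of w frames into one descriptor
    out = np.stack([sum(frame_descs[t + k] @ W[k] for k in range(w))
                    for t in range(L - w + 1)])
    d = out.mean(axis=0)  # average-pool over time
    return d / np.linalg.norm(d)

rng = np.random.default_rng(1)
frame_descs = rng.standard_normal((10, 8))  # e.g. 10 frames of 8-D descriptors
seq_d = seq_descriptor(frame_descs)
```

Under this reading, "intermediate fusion" would mean fusing the per-frame backbone descriptors inside the temporal stage rather than concatenating final sequence descriptors, but the exact wiring is best confirmed against the paper.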
-
Hello,
me again with some questions, as I don't understand some implementation details.
When handling RGBD or StereoCamera input, the keyframes are sent as a KeyframeRGB message with just one RGB image gen…
-
Hi, Professor.
I am very interested in the Patch-NetVLAD code that you posted on GitHub. I want to train the model myself, but there are some problems that I hope you can help me with. So, when I use the c…
-
```python
backbone = get_pretrained_torchvision_model(backbone_name)
if backbone_name.startswith("ResNet"):
    for name, child in backbone.named_children():
        if name == "layer3":  # Fre…
```
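The truncated comment suggests this loop freezes backbone children up to layer3. A self-contained sketch of that freezing pattern, using a tiny stand-in module instead of a torchvision ResNet; whether layer3 itself stays trainable is an assumption here, since the original comment is cut off:

```python
import torch.nn as nn

class TinyResNetLike(nn.Module):
    """Hypothetical stand-in with ResNet-style child names (not a real ResNet)."""
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Conv2d(3, 8, 3)
        self.layer2 = nn.Conv2d(8, 8, 3)
        self.layer3 = nn.Conv2d(8, 8, 3)
        self.layer4 = nn.Conv2d(8, 8, 3)

def freeze_before(backbone, stop="layer3"):
    # freeze every child before `stop`; `stop` and later children stay trainable
    for name, child in backbone.named_children():
        if name == stop:
            break
        for p in child.parameters():
            p.requires_grad = False

backbone = TinyResNetLike()
freeze_before(backbone)
```

Iterating over `named_children()` and breaking at a named layer is a common way to fine-tune only the deepest blocks of a pretrained backbone while keeping early features fixed.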