-
Hello, thanks for your paper! I have just started learning knowledge distillation, and I want to know how to compute the loss for the global descriptor. Does it work by getting a VLAD result from the student model and get…
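A common way to distill a global descriptor (a hedged sketch only, not necessarily this paper's loss) is to L2-normalize the student and teacher descriptors and penalize their mean squared difference:

```python
import numpy as np

def descriptor_distill_loss(student_desc, teacher_desc):
    """Hypothetical distillation loss between two global (e.g. VLAD) descriptors.
    Both vectors are L2-normalized first, so only their direction matters;
    the loss is the mean squared error between the normalized vectors."""
    s = student_desc / (np.linalg.norm(student_desc) + 1e-12)
    t = teacher_desc / (np.linalg.norm(teacher_desc) + 1e-12)
    return float(np.mean((s - t) ** 2))

v = np.array([1.0, 2.0, 3.0])
same_direction = descriptor_distill_loss(v, 2.0 * v)   # scaled copy -> loss ~ 0
different = descriptor_distill_loss(v, np.array([3.0, -1.0, 0.5]))
```

Cosine similarity on the normalized descriptors would be an equivalent choice up to a constant.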
-
Hi @antoine77340 ,
I couldn't help noticing that in your implementation of NetVLAD, you dropped the biases for the conv layer and only kept the multiplication with the weights, especially on that…
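For context, NetVLAD's soft cluster assignment is a 1x1 convolution followed by a softmax, which on flattened descriptors reduces to a matrix multiply with an optional bias. A minimal numpy sketch (parameter names are mine, not from the repository) of the variant with and without the bias:

```python
import numpy as np

def soft_assign(X, W, b=None):
    """NetVLAD soft cluster assignment as a 1x1 conv on flattened descriptors.
    X: (N, D) local descriptors, W: (D, K) cluster weights, b: optional (K,) bias.
    Some implementations drop b and use only X @ W, which is the detail
    questioned above."""
    logits = X @ W
    if b is not None:
        logits = logits + b
    # softmax over the K clusters for each descriptor
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
A = soft_assign(rng.normal(size=(10, 16)), rng.normal(size=(16, 5)))
```

Each row of `A` is a probability distribution over the K clusters; dropping the bias changes those probabilities unless the bias happens to be zero.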
-
Hi,
First, thank you for your contribution and congratulations!
I have a question concerning Table 1 and Table 2 in your paper.
I think the true SOTA methods are missing here. Pl…
-
I see this result in the task 1 code:
Example: NetVLAD on SoccerNet v2 (17 classes, Average-mAP = 31.37%), which is consistent with the result of my experiment.
But in the paper, in Table 2, the resu…
-
Hi, thanks for your great work!
I want to ask how I should organize the feature and match files for algorithms like [NCNet](https://papers.nips.cc/paper/2018/file/8f7d807e1f53eff5f9efbe5cb81090fb-…
-
Dear Author, on this page
https://github.com/ethz-asl/hfnet/blob/master/doc/datasets.md
google_landmarks/
├── images/
├── global_descriptors/
└── superpoint_predictions/
bdd/
├── dawn_images_vg…
-
Hi, thanks for your wonderful work on event representation. But I still can't understand how to obtain the fused event representation. That is to say, how is Eq. (3) computed?
![image](https://user…
-
NeXtVLAD was the best-performing single model in the 2nd YouTube-8M Video Understanding Challenge, reaching a GAP above 0.87 with fewer than 80M parameters. The model provides a way to transform and compress frame-level video features into a compact feature vector suitable for classifying large video files. Its basic idea is to build on NetVLAD by first splitting the high-dimensional features into groups, then aggregating the temporal information with an attention mechanism, which achieves high accuracy while using fewer…
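The grouping-plus-attention idea described above can be sketched as follows. This is a simplified numpy illustration with randomly initialized (in practice, learned) parameters; the sizes, names, and initialization are my assumptions, not the reference implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def nextvlad(X, G=4, K=8, expansion=2, seed=0):
    """Minimal NeXtVLAD sketch.
    X: (T, D) frame-level features -> (K * expansion*D/G,) video descriptor."""
    rng = np.random.default_rng(seed)
    T, D = X.shape
    DE = expansion * D            # expanded dimension (lambda * D)
    dg = DE // G                  # per-group dimension
    # hypothetical parameters; in the real model these are learned
    W_exp = rng.normal(size=(D, DE)) / np.sqrt(D)       # expansion FC
    W_att = rng.normal(size=(DE, G)) / np.sqrt(DE)      # per-group attention
    W_asg = rng.normal(size=(DE, G * K)) / np.sqrt(DE)  # cluster soft-assignment
    centers = rng.normal(size=(K, dg))                  # cluster centers

    Xe = X @ W_exp                                      # (T, DE) expand
    alpha = 1.0 / (1.0 + np.exp(-(Xe @ W_att)))         # (T, G) sigmoid attention
    a = softmax((Xe @ W_asg).reshape(T, G, K))          # (T, G, K) soft assignment
    Xg = Xe.reshape(T, G, dg)                           # grouped features
    # attention-weighted VLAD residual aggregation over time and groups
    w = alpha[..., None] * a                            # (T, G, K)
    vlad = np.einsum('tgk,tgd->kd', w, Xg) - w.sum(axis=(0, 1))[:, None] * centers
    # intra-normalize per cluster, then flatten and L2-normalize
    vlad /= np.linalg.norm(vlad, axis=1, keepdims=True) + 1e-12
    v = vlad.reshape(-1)
    return v / (np.linalg.norm(v) + 1e-12)

desc = nextvlad(np.random.default_rng(1).normal(size=(30, 64)))
```

The grouping is what keeps the output small: the descriptor has length K * (expansion * D / G) instead of NetVLAD's K * D, which is where the parameter savings come from.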
-
I planned to integrate more models (feature extractors) besides vgg16 into your library, and then noticed some resnet artifacts in some of your code comments. Are you planning to integrate those o…
-
Hi Sarlin,
I'm so excited about this implementation and want to use this approach on my own dataset, following the Aachen dataset.
I started to capture hundreds of images of my room with an Intel T265 cam…