Liubinggunzu / VLAD-VSA

VLAD-VSA: Cross-Domain Face Presentation Attack Detection with Vocabulary Separation and Adaptation
MIT License

Why is the convolution weight initialization of the NetVLAD layer different from the original NetVLAD definition? #3

Closed sherlockers closed 2 years ago

Liubinggunzu commented 2 years ago

`nn.init.orthogonal_` gives better results than other initializations. Another difference from NetVLAD is that only `self.conv.weight` is used for both the assignment and the residual calculation, while `self.centroid` is not used. This complies with the original VLAD, where the same visual words are used for assignment and residual calculation.
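A minimal PyTorch sketch of the idea described above: the 1x1 conv is orthogonally initialized, and its weight tensor doubles as the cluster centers for the residuals, so the same parameters drive assignment and residual computation. Class and variable names here are illustrative assumptions, not the repository's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VLADLayer(nn.Module):
    """Sketch of a NetVLAD-style layer where the assignment conv weights
    are reused as the cluster centers (no separate self.centroid tensor).
    Names and defaults are assumptions for illustration only."""

    def __init__(self, num_clusters=64, dim=128):
        super().__init__()
        self.num_clusters = num_clusters
        self.dim = dim
        # 1x1 conv produces soft-assignment logits over the K clusters
        self.conv = nn.Conv2d(dim, num_clusters, kernel_size=1, bias=False)
        # orthogonal init, reported in the thread to work better than alternatives
        nn.init.orthogonal_(self.conv.weight)

    def forward(self, x):
        N, C, H, W = x.shape
        # soft assignment: (N, K, H*W), softmax over the cluster axis
        soft_assign = F.softmax(
            self.conv(x).view(N, self.num_clusters, -1), dim=1)
        x_flat = x.view(N, C, -1)  # (N, C, H*W)
        # cluster centers derived from the same conv weights: (K, C)
        centers = self.conv.weight.view(self.num_clusters, C)
        # residuals between descriptors and centers: (N, K, C, H*W)
        residual = x_flat.unsqueeze(1) - centers.view(1, self.num_clusters, C, 1)
        # weighted sum of residuals per cluster: (N, K, C)
        vlad = (residual * soft_assign.unsqueeze(2)).sum(dim=-1)
        vlad = F.normalize(vlad, p=2, dim=2)               # intra-normalization
        vlad = F.normalize(vlad.view(N, -1), p=2, dim=1)   # global L2 norm
        return vlad
```

Because `centers` is just a view of `self.conv.weight`, gradients from the residual branch and the assignment branch both update the same parameters, matching the original VLAD convention of one shared set of visual words.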

sherlockers commented 2 years ago

Thank you very much for your answer.