wangzheallen / vsad

MIT License

This is the code release for our paper:

Weakly Supervised PatchNets: Describing and Aggregating Local Patches for Scene Recognition
Zhe Wang, Limin Wang, Yali Wang, Bowen Zhang, and Yu Qiao

The performance (accuracy, %) is as follows:

| Encoding | MIT Indoor | SUN397 |
| -------- | ---------- | ------ |
| Mean     | 78.5       | 63.5   |
| VLAD     | 83.9       | 70.1   |
| FV       | 83.6       | 69.0   |
| VSAD     | 84.9       | 71.7   |

Note: the encoding methods based on our scene PatchNet features surpass human performance (68.5%) on SUN397.

Feature

We release a concise and effective feature for MIT Indoor, denoted hybrid_PatchNet+VSAD in the paper, which obtains 86.1% accuracy. You can use it as a baseline or as a complementary feature for further study.

| Accuracy on MIT Indoor | Dimension   | Storage |
| ---------------------- | ----------- | ------- |
| 86.1                   | 100*256*2*2 | 1.9 GB  |
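
A minimal sketch of using the released feature as a baseline, assuming the liblinear MATLAB interface is on the path; the file and variable names below are hypothetical, and the released feature's actual layout may differ:

```matlab
% Hypothetical baseline: linear SVM on the released MIT Indoor VSAD feature.
% Assumes the liblinear MATLAB interface (train/predict) is installed.
load('mit_vsad_features.mat');  % hypothetical: train_feats, train_labels, test_feats, test_labels
model = train(train_labels, sparse(double(train_feats)), '-s 2 -c 1');
[pred, acc, ~] = predict(test_labels, sparse(double(test_feats)), model);
fprintf('Accuracy: %.2f%%\n', acc(1));  % acc(1) is the classification accuracy
```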

Model

We release our trained scene PatchNet and object PatchNet. The models are based on cudnn_v4; if your system uses cudnn_v5, you can convert them from cudnn_v4 to cudnn_v5 with this script: https://github.com/yjxiong/caffe/blob/action_recog/python/bn_convert_style.py

| Model                       | Top-5 accuracy |
| --------------------------- | -------------- |
| Object_patchnet_on_ImageNet | 85.3           |
| Scene_patchnet_on_Places205 | 82.7           |

Both networks take 128 * 128 patches as input.
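
For reference, here is a minimal sketch of dense 128 * 128 patch cropping; this is an illustration, not the released multi_crop.m, and the grid layout and function name are assumptions:

```matlab
% Hypothetical dense patch extraction (not the released multi_crop.m).
function patches = dense_patches(img, patch_size, num_per_side)
% img:          H x W x 3 input image (H, W >= patch_size)
% patch_size:   side length of each square patch, e.g. 128
% num_per_side: crops along each axis, e.g. 10 (10 x 10 = 100 patches)
[H, W, ~] = size(img);
ys = round(linspace(1, H - patch_size + 1, num_per_side));
xs = round(linspace(1, W - patch_size + 1, num_per_side));
patches = zeros(patch_size, patch_size, 3, num_per_side^2, 'like', img);
idx = 1;
for y = ys
    for x = xs
        patches(:, :, :, idx) = img(y:y+patch_size-1, x:x+patch_size-1, :);
        idx = idx + 1;
    end
end
end
```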

Code

Usage

1. Download the code and models.

2. Extract scene_net features and object_net probabilities (extracting_feature_example.m, multi_crop.m).

3. Run VSAD encoding (vsad_encoding.m, vsad_encoding_example.m, mit_pca.mat, mit_vsad_codebook.mat, object_selection_256.mat); see the conceptual sketch after this list.
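
For intuition, here is a conceptual sketch of the VSAD aggregation: the object PatchNet probabilities serve as soft-assignment weights over a semantic codebook, and probability-weighted residuals of the scene PatchNet descriptors are aggregated per codeword. This is not the released vsad_encoding.m; the interface and normalization steps are assumptions:

```matlab
% Conceptual VSAD sketch (not the released vsad_encoding.m).
function v = vsad_sketch(X, P, C)
% X: D x N local patch descriptors (e.g., PCA-reduced scene PatchNet features)
% P: K x N semantic probabilities from the object PatchNet (K selected objects)
% C: D x K codebook, one mean descriptor per object codeword
[D, N] = size(X);
K = size(P, 1);
v = zeros(D, K);
for k = 1:K
    R = X - repmat(C(:, k), 1, N);                    % residuals to codeword k
    v(:, k) = R * P(k, :)' / max(sum(P(k, :)), eps);  % probability-weighted mean residual
end
v = v(:);                        % concatenate to a (D*K)-dim vector
v = sign(v) .* sqrt(abs(v));     % power normalization
v = v / max(norm(v), eps);       % L2 normalization
end
```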

Contact

Figure Plot for Reference

(Reference plot: see the figure image in the repository.)