Download the VRD dataset and put it in the $ROOT/data directory.
Download and extract the features produced by the detectors, and also put them in $ROOT/data.
Overall structure:

    +-- data
        +-- annotation_train.mat
        +-- annotation_test.mat
        +-- objectListN.mat
        +-- predicate.mat
        +-- images
        |   +-- sg_train_images
        |   +-- sg_test_images
        +-- test
        |   +-- multi_vgg16_test_dict.pkl
        +-- train
        |   +-- multi_vgg16_train_dict.pkl
        +-- val
        |   +-- multi_vgg16_val_dict.pkl
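Before training, it can help to verify that everything landed in the right place. The following is a minimal sketch (the helper name `missing_paths` is hypothetical, not part of this codebase) that checks the layout above against a given data root:

```python
import os

# Expected entries under $ROOT/data, taken from the tree above.
EXPECTED = [
    "annotation_train.mat",
    "annotation_test.mat",
    "objectListN.mat",
    "predicate.mat",
    "images/sg_train_images",
    "images/sg_test_images",
    "test/multi_vgg16_test_dict.pkl",
    "train/multi_vgg16_train_dict.pkl",
    "val/multi_vgg16_val_dict.pkl",
]

def missing_paths(data_root):
    """Return the expected entries that are absent under data_root."""
    return [p for p in EXPECTED
            if not os.path.exists(os.path.join(data_root, p))]
```

An empty return value means the data directory matches the layout expected by the training and testing scripts.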
Training
python train.py --data_root data/
Also check the available arguments inside options/.
Testing
First, generate the detections in MATLAB format:
python test.py --which_epoch union_best_sgd
Also check the available arguments inside options/.
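If test.py fails while reading the feature pickles, a quick sanity check is to load one directly. The sketch below assumes the pickles may have been written by Python 2 tooling (common in VRD codebases, but not confirmed by this README), so it falls back to latin1 decoding; the internal structure of the dict is not documented here:

```python
import pickle

def load_feature_dict(path):
    """Load a pickled feature dict, e.g. data/test/multi_vgg16_test_dict.pkl.

    Falls back to encoding="latin1", which is needed when a pickle was
    written under Python 2 (an assumption, not stated in this README).
    """
    with open(path, "rb") as f:
        try:
            return pickle.load(f)
        except UnicodeDecodeError:
            f.seek(0)
            return pickle.load(f, encoding="latin1")
```

Inspecting `list(d.keys())[:5]` on the loaded dict is usually enough to confirm the file is intact.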
Then use the [official evaluation code](https://github.com/Prof-Lu-Cewu/Visual-Relationship-Detection).