
graph-rcnn.pytorch

PyTorch code for our ECCV 2018 paper "Graph R-CNN for Scene Graph Generation"

Introduction

This project is a set of representative scene graph generation models reimplemented on top of PyTorch 1.0, including IMP, MSDN, Neural Motifs and our Graph R-CNN.

Our reimplementations are based on several existing repositories, most notably maskrcnn-benchmark (see the Acknowledgement section).

Why do we need this repository?

The goal of gathering all these representative methods into a single repo is to enable fairer comparisons across methods under the same settings. As you may have noticed in the recent literature, the reported numbers for IMP, MSDN, Graph R-CNN and Neural Motifs are often hard to compare, especially due to the big gap between IMP-style methods (the first three) and Neural Motifs-style methods (the Neural Motifs paper and the variants built on it). We hope this repo can serve as a solid benchmark for various scene graph generation methods and contribute to the research community!

Checklist

Benchmarking

Object Detection

| source | backbone | model | bs | lr | lr_decay | mAP@0.5 | mAP@0.50:0.95 |
|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| this repo | Res-101 | faster r-cnn | 6 | 5e-3 | 70k,90k | 24.8 | 12.8 |

Scene Graph Generation (Frequency Prior Only)

| source | backbone | model | bs | lr | lr_decay | sgdet@20 | sgdet@50 | sgdet@100 |
|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| this repo | Res-101 | freq | 6 | 5e-3 | 70k,90k | 19.4 | 25.0 | 28.5 |
| motifnet | VGG-16 | freq | - | - | - | 17.7 | 23.5 | 27.6 |

* freq = frequency prior baseline
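
For reference, the frequency baseline predicts a predicate for each object pair purely from training-set statistics over (subject, predicate, object) triplets. Below is a minimal sketch of how such a prior can be built; the category counts follow the Data Preparation section, but the function and variable names are illustrative assumptions rather than this repo's API.

```python
import numpy as np

NUM_OBJ_CLASSES = 151  # 150 object categories + background (assumed layout)
NUM_PREDICATES = 51    # 50 predicate categories + "no relation" (assumed layout)

def build_freq_prior(triplets):
    """triplets: iterable of (subj_class, pred_class, obj_class) int tuples."""
    counts = np.zeros((NUM_OBJ_CLASSES, NUM_OBJ_CLASSES, NUM_PREDICATES))
    for s, p, o in triplets:
        counts[s, o, p] += 1
    # Normalize into a conditional distribution P(predicate | subj, obj).
    totals = counts.sum(axis=2, keepdims=True)
    return counts / np.maximum(totals, 1)

# At test time, score a detected pair by a simple table lookup:
# prior = build_freq_prior(train_triplets)
# pred_scores = prior[subj_label, obj_label]  # shape (NUM_PREDICATES,)
```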

Scene Graph Generation (Joint training)

| source | backbone | model | bs | lr | lr_decay | sgdet@20 | sgdet@50 | sgdet@100 |
|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| this repo | Res-101 | vanilla | 6 | 5e-3 | 70k,90k | 10.4 | 14.3 | 16.8 |

Scene Graph Generation (Step training)

| source | backbone | model | bs | lr | mAP@0.5 | sgdet@20 | sgdet@50 | sgdet@100 |
|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| this repo | Res-101 | vanilla | 8 | 5e-3 | 24.2 | 10.5 | 13.8 | 16.1 |
| this repo | Res-101 | imp | 8 | 5e-3 | 24.2 | 16.7 | 21.7 | 25.2 |
| motifnet | VGG-16 | imp | - | - | - | 14.6 | 20.7 | 24.5 |

* you can click 'this repo' in the table above to download the checkpoints.

The table above shows that our reimplementations of the baseline and the imp algorithm match the performance reported in motifnet.

Comparisons with other Methods

| model | bs | lr | mAP@0.5 | sgdet@20 | sgdet@50 | sgdet@100 |
|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| vanilla | 8 | 5e-3 | 24.2 | 10.5 | 13.8 | 16.1 |
| imp | 8 | 5e-3 | 24.2 | 16.7 | 21.7 | 25.2 |
| msdn | 8 | 5e-3 | 24.2 | 18.3 | 23.6 | 27.1 |
| graph-rcnn(no att) | 8 | 5e-3 | 24.2 | 18.8 | 23.7 | 26.2 |

* you can click 'model' in the table above to download the checkpoints.

Accordingly, all models achieve significantly better numbers than those reported in the original papers. The main reason for these consistent improvements is the per-class NMS applied to object proposals before they are sent to the relationship head. We also found that the gaps between different methods are reduced significantly. Our model performs similarly to msdn, and better than imp.
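
To make the per-class NMS step concrete, below is a minimal sketch using torchvision's batched_nms, which suppresses overlapping boxes only when they share the same class label; the function and tensor names are illustrative assumptions, not this repo's exact code path.

```python
import torch
from torchvision.ops import batched_nms

def per_class_nms(boxes, scores, labels, iou_thresh=0.5):
    """boxes: (N, 4) in xyxy format; scores: (N,); labels: (N,) class ids.
    batched_nms only suppresses boxes with the same label, so each class
    is NMS-ed independently before pairs are formed for the relation head."""
    keep = batched_nms(boxes, scores, labels, iou_thresh)
    return boxes[keep], scores[keep], labels[keep]
```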

Adding RelPN to other Methods

We added our RelPN to various algorithms and compared with the original version.

| model | relpn | bs | lr | mAP@0.5 | sgdet@20 | sgdet@50 | sgdet@100 |
|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| vanilla | no | 8 | 5e-3 | 24.2 | 10.5 | 13.8 | 16.1 |
| vanilla | yes | 8 | 5e-3 | 24.2 | 12.3 | 15.8 | 17.7 |
| imp | no | 8 | 5e-3 | 24.2 | 16.7 | 21.7 | 25.2 |
| imp | yes | 8 | 5e-3 | 24.2 | 19.2 | 23.9 | 26.3 |
| msdn | no | 8 | 5e-3 | 24.2 | 18.3 | 23.6 | 27.1 |
| msdn | yes | 8 | 5e-3 | 24.2 | 19.2 | 23.8 | 26.2 |

* you can click 'model' in the table above to download the checkpoints.

Above, we can see consistent improvements across algorithms, which demonstrates the effectiveness of our proposed relation proposal network (RelPN).

Also, since far fewer object pairs (256, versus originally more than 1k) are fed to the relation head for predicate classification, inference with RelPN is significantly faster (about 2.5x).
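
For intuition, below is a minimal sketch of this pruning step: score all ordered subject-object pairs and keep only the top-k for the relation head. The scoring function stands in for the learned relatedness estimator, and all names and shapes are illustrative assumptions rather than this repo's actual implementation.

```python
import torch

def select_relation_pairs(obj_feats, score_fn, k=256):
    """obj_feats: (N, D) per-object features.
    score_fn: maps (M, 2D) concatenated pair features to (M,) scores."""
    n = obj_feats.size(0)
    idx = torch.arange(n)
    subj_idx = idx.repeat_interleave(n)  # all ordered pairs (i, j)
    obj_idx = idx.repeat(n)
    keep = subj_idx != obj_idx           # drop self-pairs
    subj_idx, obj_idx = subj_idx[keep], obj_idx[keep]
    pair_feats = torch.cat([obj_feats[subj_idx], obj_feats[obj_idx]], dim=1)
    scores = score_fn(pair_feats)        # (N*(N-1),) relatedness scores
    topk = scores.topk(min(k, scores.numel())).indices
    return subj_idx[topk], obj_idx[topk]
```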

Tips and Tricks

Some important observations based on the experiments:

Installation

Prerequisites

Dependencies

Install all the Python dependencies using pip:

pip install -r requirements.txt

and libraries using apt-get:

apt-get update
apt-get install libglib2.0-0
apt-get install libsm6

Data Preparation

| Annotations | Object | Predicate |
|:--:|:--:|:--:|
| #Categories | 150 | 50 |

First, create a folder in the root directory of the repo:

mkdir -p datasets/vg_bm

Here, the suffix 'bm' is short for "benchmark", indicating that this is the dataset used for benchmarking. We may support other formats of the VG dataset in the future, e.g., versions with more categories.

Then, download and preprocess the data following this repo. Specifically, after downloading the Visual Genome dataset, you can follow these guidelines to obtain the following files:

datasets/vg_bm/imdb_1024.h5
datasets/vg_bm/bbox_distribution.npy
datasets/vg_bm/proposals.h5
datasets/vg_bm/VG-SGG-dicts.json
datasets/vg_bm/VG-SGG.h5

The above files will provide all the data needed for training the object detection models and scene graph generation models listed above.
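
If you want to sanity-check the prepared files, a minimal sketch is below; the dictionary and dataset keys are assumptions based on the upstream VG-SGG format, so verify them against your own files.

```python
import json
import h5py

# Illustrative sanity check of the prepared Visual Genome files.
with open("datasets/vg_bm/VG-SGG-dicts.json") as f:
    dicts = json.load(f)
print(len(dicts["idx_to_label"]), "object classes")    # expect 150
print(len(dicts["idx_to_predicate"]), "predicates")    # expect 50

with h5py.File("datasets/vg_bm/VG-SGG.h5", "r") as f:
    print(list(f.keys()))  # e.g. labels, relationships, split (assumed keys)
```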

| Annotations | Object | Attribute | Predicate |
|:--:|:--:|:--:|:--:|
| #Categories | 1600 | 400 | 20 |

Soon, I will add a data loader for training the bottom-up and top-down model on more object/predicate/attribute categories.

| Annotations | Object | Attribute | Predicate |
|:--:|:--:|:--:|:--:|
| #Categories | 2500 | ~600 | ~400 |

This data loader further increases the number of categories for training more fine-grained visual representations.

Compilation

Compile the cuda dependencies using the following commands:

cd lib/scene_parser/rcnn
python setup.py build develop

After that, you should see all the necessary components, including nms, roi_pool and roi_align, compiled successfully.

Train

Train object detection model:

Multi-GPU training:

python -m torch.distributed.launch --nproc_per_node=$NGPUS main.py --config-file configs/faster_rcnn_res101.yaml

where NGPUS is the number of GPUs available.

Train scene graph generation model jointly (train detector and sgg as a whole):

Multi-GPU training:

python -m torch.distributed.launch --nproc_per_node=$NGPUS main.py --config-file configs/sgg_res101_joint.yaml --algorithm $ALGORITHM

where NGPUS is the number of GPUs available, and ALGORITHM is the scene graph generation model name.
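
For example, a 4-GPU joint training run with the imp model might look like the following (the algorithm identifier sg_imp is an assumption; check the repo's configs for the exact names):

export NGPUS=4
python -m torch.distributed.launch --nproc_per_node=$NGPUS main.py --config-file configs/sgg_res101_joint.yaml --algorithm sg_imp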

Train scene graph generation model stepwise (train detector first, and then sgg):

Multi-GPU training:

python -m torch.distributed.launch --nproc_per_node=$NGPUS main.py --config-file configs/sgg_res101_step.yaml --algorithm $ALGORITHM

where NGPUS is the number of GPUs available, and ALGORITHM is the scene graph generation model name.

Evaluate

Evaluate object detection model:

:warning: If you want to evaluate a model from your own path, simply change MODEL.WEIGHT_DET to your own path in faster_rcnn_res101.yaml.

Evaluate scene graph frequency baseline model:

In this case, you do not need any sgg model checkpoint; the object detector alone is enough to get the evaluation result. Run the following command:

python main.py --config-file configs/sgg_res101_{joint/step}.yaml --inference --use_freq_prior

In the yaml file, please specify the path MODEL.WEIGHT_DET for your object detector.

Evaluate scene graph generation model:

Similarly, you can append '--inference $YOUR_NUMBER' to perform a partial evaluation.
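
For example, to evaluate on only a subset of the test set (500 here is an arbitrary illustrative value, and the flag's exact semantics are assumed from the sentence above):

python main.py --config-file configs/sgg_res101_step.yaml --inference 500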

:warning: If you want to evaluate a model from your own path, simply change MODEL.WEIGHT_SGG to your own path in sgg_res101_{joint/step}.yaml.

Visualization

If you want to visualize some examples, simply append the following to the command:

--visualize

Citation

@inproceedings{yang2018graph,
    title={Graph r-cnn for scene graph generation},
    author={Yang, Jianwei and Lu, Jiasen and Lee, Stefan and Batra, Dhruv and Parikh, Devi},
    booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
    pages={670--685},
    year={2018}
}

Acknowledgement

We greatly appreciate the nicely organized code developed by maskrcnn-benchmark. Our codebase is built mostly on top of it.