This repository provides the source code for GRIT, an end-to-end model for social relation inference. Our paper has been accepted to Neural Networks! The paper can be found here.
The model weights can be downloaded from Google Drive or BaiduYunPan (password: 8731).
-- Python version: 3.6.9
-- PyTorch version: 1.10.0
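For reference, a matching environment can be installed with pip; the torchvision pairing below is an assumption and may need adjusting to your CUDA build:
pip install torch==1.10.0 torchvision==0.11.1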
To train GRIT-SW224 on the PISC_Fine dataset, run run_main.sh or the following command:
python -m torch.distributed.run --standalone --nnodes=1 --nproc_per_node=8 --master_port 8730 main_ddp.py --output_dir=Transformer_output --img_size=224 --dataset=pisc_fine --backbone=swin_transformer
To train GRIT-SW384 on the PISC_Fine dataset, run the following command:
python -m torch.distributed.run --standalone --nnodes=1 --nproc_per_node=8 --master_port 8730 main_ddp.py --output_dir=Transformer_output --img_size=384 --dataset=pisc_fine --backbone=swin_transformer
To train GRIT-R101 on the PISC_Fine dataset, run the following command:
python -m torch.distributed.run --standalone --nnodes=1 --nproc_per_node=8 --master_port 8730 main_ddp.py --output_dir=Transformer_output --img_size=448 --dataset=pisc_fine --backbone=resnet101
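The training commands above launch 8 processes on a single node via torch.distributed.run. If fewer GPUs are available, adjust the launcher flag --nproc_per_node accordingly; for example, training GRIT-SW224 on 4 GPUs would look like the line below. Note that the effective batch size scales with the number of processes, so results may differ from the paper.
python -m torch.distributed.run --standalone --nnodes=1 --nproc_per_node=4 --master_port 8730 main_ddp.py --output_dir=Transformer_output --img_size=224 --dataset=pisc_fine --backbone=swin_transformer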
To evaluate the trained GRIT-SW224 model on the PISC_Fine dataset, run the following command:
python main_eval.py --model_path=path_to_GRIT-SW224 --dataset=pisc_fine --img_size=224
To evaluate the trained GRIT-SW384 model on the PISC_Fine dataset, run the following command:
python main_eval.py --model_path=path_to_GRIT-SW384 --dataset=pisc_fine --img_size=384
To visualize the attention map from the transformer's decoder, run the following command:
python visualization.py --model_path=path_to_GRIT-SW224 --dataset=pisc_fine --img_size=224
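For readers adapting the visualization to their own checkpoints, the sketch below illustrates the general technique of capturing decoder attention weights with a PyTorch forward hook. It is a minimal, self-contained example, not GRIT's actual visualization.py code; the module, embedding size, and token counts are placeholders.

import torch
import torch.nn as nn

attention_maps = []

def save_attention(module, inputs, output):
    # nn.MultiheadAttention returns (attn_output, attn_weights)
    attention_maps.append(output[1].detach().cpu())

# Stand-in for the decoder's cross-attention module; in GRIT the exact
# attribute path inside the model is implementation-specific.
cross_attn = nn.MultiheadAttention(embed_dim=256, num_heads=8, batch_first=True)
handle = cross_attn.register_forward_hook(save_attention)

queries = torch.randn(1, 10, 256)   # e.g. relation queries
memory = torch.randn(1, 49, 256)    # e.g. 7x7 backbone feature tokens
cross_attn(queries, memory, memory) # forward pass triggers the hook
handle.remove()

print(attention_maps[0].shape)      # (1, 10, 49): one map per query over image tokens

Each captured map can then be reshaped to the backbone's spatial grid and overlaid on the input image for inspection.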
If this work is helpful to you, please cite it as:
@article{YU2024106216,
title = {Graph-based social relation inference with multi-level conditional attention},
journal = {Neural Networks},
volume = {173},
pages = {106216},
year = {2024},
issn = {0893-6080},
doi = {10.1016/j.neunet.2024.106216},
url = {https://www.sciencedirect.com/science/article/pii/S0893608024001400},
author = {Xiaotian Yu and Hanling Yi and Qie Tang and Kun Huang and Wenze Hu and Shiliang Zhang and Xiaoyu Wang}
}