git clone https://github.com/backseason/PoolNet.git
cd PoolNet/
Download the following datasets and unzip them into the data folder. The corresponding training lists are:
data/msrab_hkuis/msrab_hkuis_train_no_small.lst
data/DUTS/DUTS-TR/train_pair.lst
data/HED-BSDS_PASCAL/bsds_pascal_train_pair_r_val_r_small.lst
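The .lst files above are plain-text training lists. In this style of list, each non-empty line typically pairs an image path with its ground-truth mask path, separated by whitespace (an assumed format; check the downloaded files). A minimal sketch for reading such a list:

```python
def parse_pair_list(lines):
    """Parse a pair-style training list: each non-empty line is
    assumed to hold 'image_path gt_path' separated by whitespace."""
    pairs = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        # Keep only the first two fields (image, ground truth).
        image_path, gt_path = line.split()[:2]
        pairs.append((image_path, gt_path))
    return pairs

# Example with hypothetical paths:
sample = [
    "DUTS-TR-Image/a.jpg DUTS-TR-Mask/a.png",
    "DUTS-TR-Image/b.jpg DUTS-TR-Mask/b.png",
]
print(parse_pair_list(sample))
```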
Download the following pre-trained models GoogleDrive | BaiduYun (pwd: 27p5) into the dataset/pretrained folder.
Set the --train_root and --train_list paths in train.sh correctly.
We demo using ResNet-50 as the network backbone and train with an initial learning rate of 5e-5 for 24 epochs; the learning rate is divided by 10 after 15 epochs.
./train.sh
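The schedule above (initial learning rate 5e-5, divided by 10 after 15 epochs, 24 epochs total) is a standard step decay. A minimal sketch, assuming the decay takes effect from epoch 15 onward (the exact boundary and solver wiring live in the training code):

```python
def lr_at_epoch(epoch, base_lr=5e-5, decay_epoch=15, decay_factor=0.1):
    """Step-decay schedule matching the README: the base learning rate
    is used up to decay_epoch, then divided by 10 for later epochs."""
    return base_lr * decay_factor if epoch >= decay_epoch else base_lr

# Learning rate over the 24 training epochs:
schedule = [lr_at_epoch(e) for e in range(24)]
print(schedule)
```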
We demo joint training with edge detection using ResNet-50 as the network backbone and train with an initial learning rate of 5e-5 for 11 epochs; the learning rate is divided by 10 after 8 epochs. Each epoch runs for 30,000 iterations.
./joint_train.sh
After training, the resulting model will be stored under the results/run-* folder.
For single-dataset testing: * changes accordingly, and --sal_mode indicates the dataset (details can be found in main.py):
python main.py --mode='test' --model='results/run-*/models/final.pth' --test_fold='results/run-*-sal-e' --sal_mode='e'
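The command above varies only in the run folder and the --sal_mode letter ('e' is the mode shown in the README; the full letter-to-dataset mapping is defined in main.py). A small helper that assembles the command string, for illustration only:

```python
def build_test_command(run_dir, sal_mode):
    """Build the single-dataset test command from the README.
    run_dir is the results/run-* folder; sal_mode is a dataset
    letter whose mapping is defined inside main.py."""
    return (
        "python main.py --mode='test' "
        f"--model='{run_dir}/models/final.pth' "
        f"--test_fold='{run_dir}-sal-{sal_mode}' "
        f"--sal_mode='{sal_mode}'"
    )

print(build_test_command("results/run-0", "e"))
```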
For testing on all datasets used in our paper (2 indicates the GPU to use):
./forward.sh 2 main.py results/run-*
For joint training, to get salient object detection results, use
./forward.sh 2 joint_main.py results/run-*
To get edge detection results, use
./forward_edge.sh 2 joint_main.py results/run-*
All resulting saliency maps will be stored under the results/run-*-sal-* folders in .png format.
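Predicted saliency maps such as these are commonly scored with mean absolute error (MAE) against the ground-truth mask. A minimal pure-Python sketch of the metric (an illustration, not a helper shipped with this repo), with both maps given as 2-D grids of values in [0, 1]:

```python
def mae(pred, gt):
    """Mean absolute error between a predicted saliency map and a
    ground-truth mask, both 2-D lists of floats in [0, 1]."""
    total, count = 0.0, 0
    for pred_row, gt_row in zip(pred, gt):
        for p, g in zip(pred_row, gt_row):
            total += abs(p - g)
            count += 1
    return total / count

# Tiny 2x2 example:
pred = [[0.0, 1.0], [0.5, 0.5]]
gt = [[0.0, 1.0], [1.0, 0.0]]
print(mae(pred, gt))  # → 0.25
```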
We provide the pre-trained model, pre-computed saliency maps and evaluation results for:
Note:
batch_size=1
If you have any questions, feel free to contact me via: j04.liu(at)gmail.com.
@inproceedings{Liu2019PoolSal,
title={A Simple Pooling-Based Design for Real-Time Salient Object Detection},
author={Jiang-Jiang Liu and Qibin Hou and Ming-Ming Cheng and Jiashi Feng and Jianmin Jiang},
booktitle={IEEE CVPR},
year={2019},
}
Thanks to DSS and DSS-pytorch.