This is the code for "Industrial Scene Text Detection with Refined Feature-Attentive Network". For more details, please refer to our TCSVT paper or poster.
Check INSTALL.md for installation instructions.
Update the dataset root path in $RFN_ROOT/train.py.
Dataset processing can be configured in $RFN_ROOT/tools/datagen.py.
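If you need a starting point for the annotation parsing done there, the sketch below shows one common layout for text-detection ground truth (ICDAR-style `x1,y1,...,x4,y4,transcription` lines). The format and the `parse_gt_file` helper are assumptions, so adapt them to what `datagen.py` actually expects.

```python
import numpy as np

def parse_gt_file(path):
    """Parse one ICDAR-style ground-truth file into quadrilaterals and transcriptions.

    Assumed line format: x1,y1,x2,y2,x3,y3,x4,y4,transcription
    (check the label files your dataset actually ships with).
    """
    quads, texts = [], []
    with open(path, encoding='utf-8-sig') as f:
        for line in f:
            parts = line.strip().split(',')
            if len(parts) < 9:
                continue
            quads.append([float(p) for p in parts[:8]])
            texts.append(','.join(parts[8:]))
    return np.asarray(quads, dtype=np.float32).reshape(-1, 4, 2), texts
```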
Modify the test path in $RFN_ROOT/multi_image_test_ocr.py.
Modify the anchor settings in $RFN_ROOT/tools/encoder.py, including anchor_areas and aspect_ratios.
# refer to /data_process/Compute aspect_ratios and area_ratios.py
For example, the settings for MPSC are as follows:
self.anchor_areas = [16*16., 32*32., 64*64., 128*128., 256*256., 512*512.]
self.aspect_ratios = [1., 2., 3., 5., 1./2., 1./3., 1./5., 7.]
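As a rough guide to picking these values, the following is a minimal sketch of the idea behind /data_process/Compute aspect_ratios and area_ratios.py: summarize the ground-truth box sizes of your dataset and choose anchor_areas / aspect_ratios that cover them. The box list and the percentile choice below are illustrative assumptions.

```python
import numpy as np

def summarize_boxes(boxes):
    """Print size statistics for axis-aligned boxes given as (xmin, ymin, xmax, ymax)."""
    boxes = np.asarray(boxes, dtype=np.float32)
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]
    side = np.sqrt(w * h)                # compare against sqrt(anchor_areas)
    ratio = w / np.maximum(h, 1e-6)      # compare against aspect_ratios
    print("sqrt(area) 5/50/95 percentiles:", np.percentile(side, [5, 50, 95]))
    print("aspect ratio 5/50/95 percentiles:", np.percentile(ratio, [5, 50, 95]))

# Illustrative boxes only; collect these from your own annotations.
summarize_boxes([[10, 20, 90, 60], [0, 0, 256, 32], [5, 5, 40, 40]])
```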
# create your data cache directory
cd RFN_ROOT
# Download the pretrained SE-ResNet50 model (https://data.lip6.fr/cadene/pretrainedmodels/se_resnet50-ce0d4300.pth)
# Init RFN with the pretrained SE-ResNet50 model
python ./tools/get_state_dict.py
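Conceptually, this initialization copies the matching backbone tensors from the downloaded checkpoint into a fresh RFN state dict. The sketch below illustrates that idea; it is not the actual get_state_dict.py, and the RFN() constructor and output path in the usage comment are placeholders.

```python
import torch
import torch.nn as nn

def init_backbone(model: nn.Module, checkpoint_path: str, out_path: str) -> None:
    """Copy pretrained tensors whose names and shapes match into `model`,
    then save the merged state dict as the training init checkpoint."""
    pretrained = torch.load(checkpoint_path, map_location='cpu')
    own = model.state_dict()
    matched = {k: v for k, v in pretrained.items()
               if k in own and own[k].shape == v.shape}
    own.update(matched)
    model.load_state_dict(own)
    torch.save(own, out_path)
    print(f'copied {len(matched)} tensors from {checkpoint_path}')

# Placeholder usage; RFN() and the output path are assumptions:
# init_backbone(RFN(), 'se_resnet50-ce0d4300.pth', './checkpoint/init_rfn.pth')
```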
python train.py --config_file=./configs/R_50_C4_1x_train.yaml
The training size is set to a multiple of 128.
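If you resize inputs yourself, a tiny helper like this (illustrative only, not part of the repo) rounds an arbitrary target size up to the nearest multiple of 128:

```python
def round_to_multiple(size: int, base: int = 128) -> int:
    """Round `size` up to the nearest multiple of `base`."""
    return ((size + base - 1) // base) * base

assert round_to_multiple(600) == 640   # 600 is not a valid training size; 640 is
assert round_to_multiple(768) == 768   # already a multiple of 128
```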
Multi-GPU training has not been tested yet; be careful when using more than one GPU.
The test scripts are located in $RFN_ROOT/multi_image_test_ocr.py and $RFN_ROOT/test/.
### test each image
python test.py --dataset=MPSC --config_file=./configs/R_50_C4_1x_train.yaml --test
### eval result
python test.py --dataset=MPSC --eval
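For intuition about what the evaluation measures: a detection is typically matched to a ground-truth box by IoU. The snippet below is a simplified illustration (axis-aligned boxes and a 0.5 threshold are assumptions); the repo's eval script handles the full polygon-based protocol.

```python
def iou_xyxy(a, b):
    """IoU between two axis-aligned boxes given as (xmin, ymin, xmax, ymax)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

# A predicted box usually counts as correct when IoU with a ground-truth box >= 0.5.
print(iou_xyxy((0, 0, 10, 10), (5, 0, 15, 10)))  # ~0.333
```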
Model pretrained on SynthMPSC: https://pan.baidu.com/s/1BI2T4ncowKu908dcd9tT7g (extraction code: 0ke0)
Model pretrained on SynthText: https://pan.baidu.com/s/1IwALX0LrQewsk9Rf5cK1Dw (extraction code: 6dzr)
If you find our method useful for your research, please cite:
@ARTICLE{9726175,
author={Guan, Tongkun and Gu, Chaochen and Lu, Changsheng and Tu, Jingzheng and Feng, Qi and Wu, Kaijie and Guan, Xinping},
journal={IEEE Transactions on Circuits and Systems for Video Technology},
title={Industrial Scene Text Detection With Refined Feature-Attentive Network},
year={2022},
volume={32},
number={9},
pages={6073-6085},
doi={10.1109/TCSVT.2022.3156390}}
- This code is free for academic research purposes only and is licensed under the 2-clause BSD License; see the LICENSE file for details.