whai362 / PSENet

Official PyTorch implementation of PSENet.
Apache License 2.0

the data of mlt2017 used in pretrained #38

Open ChChwang opened 5 years ago

ChChwang commented 5 years ago

The model trained on ICDAR 2015, starting from the model pretrained on MLT 2017, cannot detect Chinese words. Did the MLT 2017 pretrained models not use the Chinese data? Which datasets of MLT 2017 were used in pretraining? Thanks.

whai362 commented 5 years ago

The training set and validation set of ICDAR 2017 MLT (http://rrc.cvc.uab.es/?ch=8).

ChChwang commented 5 years ago

> The training set and validation set of ICDAR 2017 MLT (http://rrc.cvc.uab.es/?ch=8).

Thanks. When pretraining, were the MLT17 images enlarged 2 times, or were the original images used? Thank you very much.

whai362 commented 5 years ago

We use multi-scale training. The details can be found in the new paper.
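As a rough illustration of multi-scale training, one common scheme is to sample a scale factor per training iteration and resize the image accordingly. The scale set below is a placeholder for illustration only; the actual scales used by PSENet are specified in the paper.

```python
import random

# Hypothetical scale set for illustration; the real values come from the paper.
SCALES = [0.5, 1.0, 2.0]

def pick_training_scale(rng: random.Random) -> float:
    """Sample one scale factor per training iteration (multi-scale training)."""
    return rng.choice(SCALES)

rng = random.Random(0)
# Each iteration would resize the training image by the sampled factor.
sampled = [pick_training_scale(rng) for _ in range(5)]
print(sampled)
```

The sampled factor would then be applied to both the image and its text-region annotations before cropping.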

ChChwang commented 5 years ago

> We use multi-scale training. The details can be found in the new paper.

The last paragraph of Section 4.4 mentions "We enlarge the original image by 2 times". What does this mean? Is it only for the testing process?

whai362 commented 5 years ago

Yes, just for testing.
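To make the test-time step concrete: the 2x enlargement simply upsamples the input image before running detection. A minimal sketch, using nearest-neighbour upscaling as a stand-in (the repository would typically use an interpolating resize such as `cv2.resize`):

```python
import numpy as np

def enlarge_for_test(img: np.ndarray, scale: int = 2) -> np.ndarray:
    """Upscale an H x W (x C) image by an integer factor via nearest-neighbour.

    Illustrative stand-in for the 2x test-time enlargement discussed above;
    a real pipeline would use bilinear interpolation.
    """
    return img.repeat(scale, axis=0).repeat(scale, axis=1)

# A 2x3 single-channel "image" becomes 4x6 after 2x enlargement.
img = np.arange(6, dtype=np.uint8).reshape(2, 3)
big = enlarge_for_test(img, scale=2)
print(big.shape)  # (4, 6)
```

Detected boxes would then be divided by the same factor to map them back to the original image coordinates.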