unizard / AwesomeArxiv


[2018.05.10] Edit Probability for Scene Text Recognition #177

Open unizard opened 6 years ago

unizard commented 6 years ago

CVPR2018

Institute: Hikvision
URL: https://arxiv.org/pdf/1805.03384.pdf
Keyword: SceneTextRecognition, Attention-based Encoder-Decoder Network
Interest: 2

Summary

Well-written paper. #TheCompanyWhereJunsikIsInterning #TheyDoGoodResearchHaha

unizard commented 6 years ago

[Figure: Fig. 1 from the paper — examples of missing and superfluous characters]

Fig. 1 provides examples illustrating the phenomenon of missing and superfluous characters when training an attention-based text recognition model on the ground truth (gt) “DOVE#”. Here, ‘#’ represents the End-Of-Sequence (EOS) symbol, which is commonly used in attention-based methods [7, 25, 32]. In Fig. 1 (a) and (b), the model may recognize the inputs as “DVE#” and “DOOVE#” respectively, based on the output probability distribution (pd) sequences. Comparing against the gt “DOVE#”, it is natural to say that the former misses an ‘O’ and the latter has a superfluous ‘O’.
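To make the idea concrete, here is a much-simplified toy sketch of an edit-probability-style dynamic program: it scores a target string against a sequence of per-step character distributions while allowing for missing and superfluous characters. This is NOT the authors' exact recursion (the paper additionally learns a character model for the missing/superfluous probabilities; the fixed `p_miss` constant, the function name, and the superfluous-transition weight below are all placeholder assumptions for illustration).

```python
import numpy as np

def edit_probability(probs, target, vocab, p_miss=0.01):
    """Toy edit-probability DP (a simplified stand-in, not the paper's
    exact formulation).

    probs : (T, V) array of per-step character distributions
    target: string to score, e.g. "DOVE#" (EOS included)
    vocab : list of characters indexing the V axis
    p_miss: fixed stand-in probability for a missing character
    """
    idx = {c: i for i, c in enumerate(vocab)}
    T, J = len(probs), len(target)
    dp = np.zeros((T + 1, J + 1))
    dp[0, 0] = 1.0
    # Target characters consumed with no model step at all ("missing").
    for j in range(1, J + 1):
        dp[0, j] = dp[0, j - 1] * p_miss
    for t in range(1, T + 1):
        p_t = probs[t - 1]
        for j in range(J + 1):
            # Probability that step t emits the next needed character.
            p_next = p_t[idx[target[j]]] if j < J else 0.0
            # Superfluous: step t is consumed without matching a target char.
            dp[t, j] = dp[t - 1, j] * (1.0 - p_next)
            if j > 0:
                # Match: step t emits target[j-1].
                dp[t, j] += dp[t - 1, j - 1] * p_t[idx[target[j - 1]]]
                # Missing: target char consumed without a model step.
                dp[t, j] += dp[t, j - 1] * p_miss
    return dp[T, J]

# Demo mirroring Fig. 1(a): near-one-hot distributions for "DVE#"
# (the 'O' is missing) still give "DOVE#" a small but nonzero score.
vocab = list("DOVE#")
eye = np.eye(len(vocab))
full = np.stack([eye[vocab.index(c)] for c in "DOVE#"])
short = np.stack([eye[vocab.index(c)] for c in "DVE#"])
print(edit_probability(full, "DOVE#", vocab))   # high (exact match path)
print(edit_probability(short, "DOVE#", vocab))  # small but nonzero
```

The point of the DP is exactly the phenomenon described above: instead of scoring the gt frame-by-frame, the probability mass is summed over alignments that tolerate a missing ‘O’ (consume a gt char with no step) or a superfluous ‘O’ (consume a step with no gt char).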

raminrahimi6970 commented 1 year ago

Is there an implementation of this loss?