unizard opened 6 years ago
Fig. 1 provides examples to illustrate the phenomenon of missing and superfluous characters when training an attention-based text recognition model on the ground truth “DOVE#”. Here, ‘#’ represents the End-Of-Sequence (EOS) symbol, which is commonly used in attention-based methods [7, 25, 32]. In Fig. 1 (a) and (b), the model may recognize the inputs as “DVE#” and “DOOVE#” respectively, based on the output probability-distribution (pd) sequences. Comparing against the ground truth (gt) “DOVE#”, it is natural to say that the former misses an ‘O’ and the latter has a superfluous ‘O’.
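As a minimal sketch of the "missing vs. superfluous character" idea (this is plain Levenshtein alignment, not the paper's Edit Probability loss, which works on the probability-distribution sequences), the two cases from Fig. 1 can be reproduced by backtracing an edit-distance table between the predicted string and the ground truth:

```python
def edit_ops(pred, gt):
    """Align pred to gt by edit distance; return (missing, superfluous) characters."""
    m, n = len(pred), len(gt)
    # Standard Levenshtein DP table: d[i][j] = distance between pred[:i] and gt[:j]
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == gt[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # delete from pred (superfluous char)
                          d[i][j - 1] + 1,        # insert into pred (missing char)
                          d[i - 1][j - 1] + cost) # match / substitution
    # Backtrace one optimal alignment, collecting the edit operations
    missing, superfluous = [], []
    i, j = m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and pred[i - 1] == gt[j - 1] and d[i][j] == d[i - 1][j - 1]:
            i, j = i - 1, j - 1                   # characters match
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            superfluous.append(pred[i - 1]); i -= 1
        elif j > 0 and d[i][j] == d[i][j - 1] + 1:
            missing.append(gt[j - 1]); j -= 1
        else:
            i, j = i - 1, j - 1                   # substitution
    return missing[::-1], superfluous[::-1]

print(edit_ops("DVE", "DOVE"))    # prediction misses an 'O'
print(edit_ops("DOOVE", "DOVE"))  # prediction has a superfluous 'O'
```

The paper's actual EP loss marginalizes over such edit paths using the per-step output distributions rather than hard-decoded strings; this toy alignment only illustrates the two error types being named.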
Is there an implementation of this loss?
CVPR2018
Institute: Hikvision
URL: https://arxiv.org/pdf/1805.03384.pdf
Keyword: SceneTextRecognition, Attention-based Encoder-Decoder Network
Interest: 2
Summary
Well-written paper. #TheCompanyJunsikIsInterningAt #TheyDoGoodResearch haha