heromanba opened this issue 4 years ago
Any idea how to combine these two repositories to make it end to end now?
Not yet. I think end-to-end training is difficult. It might be possible to train these two models separately and cascade them together in inference. But another problem is how to train CRAFT on cropped text images.
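To illustrate the cascade idea: train the detector and recognizer separately, then at inference run detection first, crop the detected regions, and feed each crop to the recognizer. A minimal sketch below, where `detector` and `recognizer` are stand-ins for the real CRAFT and recognition models (both hypothetical here), and boxes are simplified to axis-aligned `(x1, y1, x2, y2)` tuples rather than the polygons CRAFT actually outputs:

```python
def crop_regions(image, boxes):
    """Crop axis-aligned (x1, y1, x2, y2) boxes out of a row-major image."""
    return [[row[x1:x2] for row in image[y1:y2]]
            for (x1, y1, x2, y2) in boxes]

def recognize_pipeline(image, detector, recognizer):
    """Cascade: detect word boxes, crop each region, recognize each crop.

    detector:   image -> list of (x1, y1, x2, y2) boxes (e.g. CRAFT inference)
    recognizer: cropped image -> text string (e.g. deep-text-recognition)
    """
    boxes = detector(image)
    crops = crop_regions(image, boxes)
    return [(box, recognizer(crop)) for box, crop in zip(boxes, crops)]

# Toy usage with stub models on a 6x10 "image":
image = [[0] * 10 for _ in range(6)]
stub_detector = lambda im: [(1, 1, 4, 3)]
stub_recognizer = lambda crop: "word"
results = recognize_pipeline(image, stub_detector, stub_recognizer)
```

The two models never share gradients in this setup, which sidesteps the end-to-end training problem but means detection errors propagate uncorrected into recognition.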
Thanks for your reply! By the way, I switched to another end-to-end repository, https://github.com/MalongTech/research-charnet, for my needs. Check the end-to-end text recognition section of this list, https://github.com/hwalsuklee/awesome-deep-text-detection-recognition, if you need it. : )
Thanks, it's very helpful. ^_^
CRAFT is available on pip; check it out.
The repo https://github.com/MalongTech/research-charnet doesn't produce the desired results.
Hi, thanks for sharing this great work. I noticed that in the ICDAR2019 ArT results ranking table there is a note saying, "Before text recognition, we used the text detector called CRAFT as a preprocessing step." I am wondering how you apply CRAFT as preprocessing. Do you train the two models jointly, or just use the pretrained CRAFT model's inference output as preprocessing?