Closed — kapitsa2811 closed this issue 4 years ago
The original CRAFT model was trained using a weakly supervised approach for which the source code was unavailable. The weakly supervised approach allows the authors to use much larger datasets because it doesn't require character-level labels. Please see the paper for more information. We have an open issue for implementing the weakly supervised version (https://github.com/faustomorales/keras-ocr/issues/40), but I'm not sure if/when I will be able to get to it.
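To give a rough sense of what "no character-level labels" means in practice: CRAFT's weak supervision generates pseudo character-level boxes from word-level annotations (using the interim model's region scores), and the paper's crude fallback, used for estimating confidence, simply splits the word box uniformly by character count. Below is a minimal sketch of that uniform split only — `split_word_box` is a hypothetical helper, not part of keras-ocr or the paper's code:

```python
def split_word_box(box, text):
    """Split an axis-aligned word box (x, y, w, h) into one pseudo box
    per character by dividing its width uniformly.

    This is only the crude fallback split; the actual CRAFT weak
    supervision refines character boxes with the interim model's
    region-score maps.
    """
    x, y, w, h = box
    n = len(text)
    char_w = w / n  # uniform width per character
    return [(x + i * char_w, y, char_w, h) for i in range(n)]

# A 90-pixel-wide word box for "cat" yields three 30-pixel-wide boxes.
boxes = split_word_box((10, 20, 90, 30), "cat")
```

The real procedure then compares these uniform splits against the model's own character predictions to weight each word's contribution to the loss, which is what lets training proceed on datasets that only have word-level labels.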
All that said, when you fine-tune a detector for a specific task, I would expect the fine-tuned detector to perform better on that task than the general CRAFT model trained in a weakly supervised manner.
Since this is not a bug, I'm closing this issue for the moment. Thanks!
@faustomorales, thanks for the great work.
I have a question regarding fine-tuning of the detection model. As I understand it, keras-ocr internally uses the CRAFT model to localize words, and during fine-tuning the weights of the original CRAFT model get updated (correct me if I am wrong). You have already provided an example (https://keras-ocr.readthedocs.io/en/latest/examples/fine_tuning_detector.html) in which the "ICDAR 2013" dataset is used for training. After fine-tuning on it, can I expect the new model to have better (or equal) localization accuracy compared to the original CRAFT model?
Thanks in Advance.