Open cinjon opened 9 years ago
There are no default models, but you can train one easily, either using training data from real scanned images or artificial data generated with ocropus-linegen. We have used it for Devanagari and Greek script with a lot of success. Some researchers have reported results on Arabic handwriting recognition using OCRopus. I can help you get a basic model running if you decide to train your own models.
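The real tool for artificial data generation is ocropus-linegen; as a rough sketch of the idea only (not ocropy's actual code), here is a hypothetical Pillow-based generator that renders text lines and writes them in the pair layout ocropy's training tools expect (`base.bin.png` plus `base.gt.txt`). Names and sizes are illustrative:

```python
# Hypothetical sketch of artificial training-data generation:
# render each ground-truth line to a PNG and write the matching
# transcript next to it. In practice, use ocropus-linegen with a
# real font for your script; this only illustrates the file layout.
import os
from PIL import Image, ImageDraw, ImageFont

def render_lines(lines, outdir="linegen"):
    os.makedirs(outdir, exist_ok=True)
    font = ImageFont.load_default()  # substitute a real .ttf for your script
    for i, text in enumerate(lines):
        img = Image.new("L", (800, 48), color=255)   # white line strip
        ImageDraw.Draw(img).text((10, 10), text, fill=0, font=font)
        base = os.path.join(outdir, "%06d" % i)
        img.save(base + ".bin.png")                   # line image
        with open(base + ".gt.txt", "w", encoding="utf-8") as f:
            f.write(text + "\n")                      # matching transcript

render_lines(["hello world", "ocropy test line"])
```

Each generated pair can then be fed to ocropus-rtrain like any scanned ground truth.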
Thanks so much Adnan!
Your help would be very appreciated. Can you point me to what you did with the Devanagari or Greek languages? We can also take this offline if you prefer.
You are welcome!
The only different thing we did with Devanagari is the text-line normalization. Instead of using the default ocropus line normalization, we used a different method. I think it would be better if we could talk off this platform. You can email me at adnan@cs.uni-kl.de.
Hi, Thanks for this wonderful project.
I am trying to test it on Japanese text. As you may or may not know, Japanese characters look like this: "日本語でFracturは亀の子文字という". Yes, there are characters with more than 20 strokes, and Japanese uses around 5000 different characters. Which tuning parameters do I need to care about? Rough suggestions are appreciated; I will try them.
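One way to size the problem before tuning anything is to count the distinct characters in your ground truth, since that set becomes the network's output codec. A small sketch (the `book/*/*.gt.txt` pattern is just the conventional ocropy layout, adjust to taste):

```python
# Count the distinct characters (the eventual codec) across
# ground-truth files; the glob pattern is illustrative.
import codecs
import glob

def codec_size(pattern="book/*/*.gt.txt"):
    chars = set()
    for fname in glob.glob(pattern):
        with codecs.open(fname, encoding="utf-8") as f:
            chars.update(f.read())
    chars.discard("\n")
    return len(chars)

# Example with an in-memory line instead of files:
sample = u"日本語でFracturは亀の子文字という"
print(len(set(sample)))  # 19 distinct characters in this line
```

A codec of several thousand characters is far larger than the Latin default, which is why hidden-layer size matters so much here.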
In ocropus-rtrain, I changed `repr` to `unicode`:

```python
print " TRU:", unicode(transcript)
print " ALN:", unicode(gta[:len(transcript)+5])
print " OUT:", unicode(pred[:len(transcript)+5])
```
You are great !!! My Mac is learning 2705 characters now. It's just like a kid, trying to read. Model data is over 50 MB.
ocropus-rtrain creates the codec (the target character set) via read_text + lstm.normalize_nfkc. ocrolib.read_text() calls ocrolib.normalize_text(), which calls unicodedata.normalize('NFC', s) internally. lstm.normalize_nfkc() calls unicodedata.normalize('NFKC', s).
During the training loop, the correct text (transcript) is loaded by ocrolib.read_text(base + ".gt.txt"). This transcript does not go through NFKC normalization.
Doesn't this cause any problem?
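For what it's worth, the two normalization forms really do differ on characters common in Japanese text, so a codec built with NFKC and transcripts normalized only with NFC can disagree. A minimal demonstration with the full-width Latin 'A' (U+FF21):

```python
# NFC vs NFKC: NFKC folds compatibility characters such as the
# full-width Latin 'A' (U+FF21) down to plain 'A', while NFC leaves
# them unchanged. If the codec is built with NFKC but transcripts
# are only NFC-normalized, such characters will not match the codec.
import unicodedata

s = u"\uFF21"                                   # full-width 'A'
print(unicodedata.normalize("NFC", s) == s)     # True: NFC keeps it
print(unicodedata.normalize("NFKC", s))         # A: NFKC folds it
```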
After 4 million iterations with 2402 distinct Japanese characters, it does not seem to converge. I'll try the C++ version.
How big was your dataset?
I generated 2000 lines of random text (UTF-8) from 2402 characters (the official common-use characters). The C++ version seems to run without any modification.
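Generating random lines from a fixed character set, as described above, can be sketched like this (charset, line length, and seed are illustrative; in practice the charset would be the 2402 common-use characters):

```python
# Sketch: build n_lines random text lines by sampling uniformly
# from a fixed character set. A fixed seed makes the corpus
# reproducible across runs.
import random

def make_lines(charset, n_lines=2000, line_len=30, seed=0):
    rng = random.Random(seed)
    return [u"".join(rng.choice(charset) for _ in range(line_len))
            for _ in range(n_lines)]

charset = u"亀の子文字日本語"  # stand-in for the full 2402-character set
lines = make_lines(charset, n_lines=5)
print(len(lines), len(lines[0]))  # 5 30
```

Uniform sampling gives every character equal coverage, though real text has a very skewed character frequency distribution, which is worth keeping in mind when judging error rates.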
For Chinese characters, you probably need a much larger number of hidden units, and possibly some other tricks as well. Please share what you come up with.
@isaomatsunami Have you made any progress in training on Japanese characters? I'm trying to train ocropy to recognize Chinese now.
No. I tried ocropy with 200 hidden nodes and found, as far as I can tell, that it began to learn one character by forgetting another. I am now training clstm on 3877 classes of Chinese/Japanese characters with 800 hidden nodes. After 150,000 iterations, the error rate stays at 3.8-5%. See the clstm section.
Any update about Chinese? I have read Adnan's PhD thesis, and I have 2 million documents (PDF or XPS, which we can convert to JPEG) containing both Chinese and English characters. I need some help and tips on how to train a model. Do we need to specify the DPI of the images?
Hi,
It would be interesting to see how LSTM would work on Chinese. Can you send me some sample pages?
Kind regards,
Adnan Ul-Hasan
On Sat, Apr 16, 2016 at 9:06 PM -0700, "wanghaisheng" notifications@github.com wrote:
@adnanulhasan
You can reach me here: edwin_uestc@163.com
@isaomatsunami Sir, how did you get all your ground-truth data? I am using the approach at https://github.com/tmbdev/ocropy/wiki/Working-with-Ground-Truth right now, but I want to generate data from an existing character set.
Hi guys, I am working on a model for an Indic language, Telugu, and I am stuck at this point. I just want to train it with the Telugu character set, but ocropus-rtrain loads all characters, digits, and so on. I even created a telugu='' variable in ocrolib/chars.py, but did not succeed. Please help me.
Hi, Training ocropy for Telugu should be straightforward. You can use the -c parameter to include the characters from the GT text files.
Hi @adnanulhasan, thanks for your response. I'm trying the command ocropus-rtrain -o te book/0001/010000.bin.png -c telugucharacters but it's not working.
Give the path to the gt.txt files instead of mentioning telugucharacters: -c book/0001/010000.gt.txt
@adnanulhasan Thanks, dude. Sometimes a traceback error comes up during training. Is this issue still open?
@adnanulhasan One of the papers from your group mentions the availability of a ground-truth Devanagari database called 'Dev-DB'. Is there a possibility you can link me to it?
@adnanulhasan If I want to train an Arabic model, do you suggest using ocropy or clstm? What changes should I make to ocropy's chars.py?
Is there support for non-Latin languages like Chinese, Japanese, or Thai?