ocropus-archive / DUP-ocropy

Python-based tools for document analysis and OCR
Apache License 2.0

Other Languages #54

Open cinjon opened 9 years ago

cinjon commented 9 years ago

Is there support for non-Latin languages like Chinese, Japanese, or Thai?

adnanulhasan commented 9 years ago

There are no default models, but you can train one easily, either using training data from real scanned images or artificial data generated with ocropus-linegen. We have used it for the Devanagari and Greek scripts with a lot of success, and some researchers have reported results on Arabic handwriting recognition using OCRopus. I can help you get a basic model running if you decide to train your own models.
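For context, a minimal sketch of that workflow, based on the ocropus-linegen example in the ocropy README (the corpus, font, and model names here are placeholders, and the exact output layout may vary by version):

```
# generate artificial training lines from a UTF-8 text corpus and a font
ocropus-linegen -t corpus.txt -f NotoSansDevanagari-Regular.ttf

# train an LSTM model on the generated line images
# (linegen/ is the default output directory of ocropus-linegen)
ocropus-rtrain -o mymodel linegen/*/*.bin.png
```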

cinjon commented 9 years ago

Thanks so much Adnan!

Your help would be very appreciated. Can you point me to what you did with the Devanagari or Greek languages? We can also take this offline if you prefer.

adnanulhasan commented 9 years ago

You are welcome!

The only thing we did differently for Devanagari is the text-line normalization: instead of using the default ocropus line normalization, we used a different method. I think it would be better if we could talk off this platform. You can email me at adnan@cs.uni-kl.de.

isaomatsunami commented 9 years ago

Hi, Thanks for this wonderful project.

I am trying to test it on Japanese text. In case you don't know, Japanese text looks like this: "日本語でFracturは亀の子文字という". Yes, there are characters with more than 20 strokes, and Japanese uses around 5,000 different characters. Which tuning parameters do I have to care about? Rough suggestions are appreciated; I will try them.

isaomatsunami commented 8 years ago

In ocropus-rtrain, I changed from repr to unicode:

```python
print " TRU:", unicode(transcript)
print " ALN:", unicode(gta[:len(transcript)+5])
print " OUT:", unicode(pred[:len(transcript)+5])
```

OCROPY learns Japanese

You are great !!! My Mac is learning 2705 characters now. It's just like a kid, trying to read. Model data is over 50 MB.

isaomatsunami commented 8 years ago

ocropus-rtrain creates the codec (the union of target characters) via read_text + lstm.normalize_nfkc. ocrolib.read_text() calls ocrolib.normalize_text(), which internally calls unicodedata.normalize('NFC', s), while lstm.normalize_nfkc() calls unicodedata.normalize('NFKC', s).

During the training loop, the ground-truth text (the transcript) is loaded by ocrolib.read_text(base + ".gt.txt"), so the transcript does not go through NFKC normalization.

Doesn't this cause a problem?
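To illustrate the potential mismatch, here is a small standalone Python sketch (my own, not from ocropy) with characters on which NFC and NFKC disagree, as is common in Japanese text:

```python
import unicodedata

s = u"ｶﾞ１"  # half-width katakana "ga" + full-width digit one
print(unicodedata.normalize('NFC', s))   # 'ｶﾞ１' : NFC keeps compatibility forms
print(unicodedata.normalize('NFKC', s))  # 'ガ1'  : NFKC folds them to canonical forms
```

If the codec is built from NFKC-normalized text while the transcripts are only NFC-normalized, such characters in the transcripts could never match an entry in the codec.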

isaomatsunami commented 8 years ago

After 4 million iterations with 2,402 kinds of Japanese characters, it does not seem to converge. I'll try the C++ version.

cinjon commented 8 years ago

How big was your dataset?

isaomatsunami commented 8 years ago

I generated 2,000 lines of random text (UTF-8) from 2,402 characters (the officially designated common-use characters). The C++ version seems to run without any modification.
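For anyone reproducing this, a minimal sketch of such a generator (the file names and the 30-character line length are my own choices); its output can be fed to ocropus-linegen via -t:

```python
import io
import random

# load the target character set, one character per line
with io.open("charset.txt", encoding="utf-8") as f:
    chars = [line.strip() for line in f if line.strip()]

# write 2000 random lines of 30 characters each
with io.open("random_lines.txt", "w", encoding="utf-8") as f:
    for _ in range(2000):
        f.write(u"".join(random.choice(chars) for _ in range(30)) + u"\n")
```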

tmbdev commented 8 years ago

For Chinese characters, you probably need a much larger number of hidden units, and possibly some other tricks as well. Please share what you come up with.
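As a concrete (hypothetical) example, assuming your ocropy version exposes the hidden-layer size through ocropus-rtrain's -S/--hiddensize option, a run with a larger hidden layer could look like:

```
ocropus-rtrain -S 800 -o zh-model book/*/*.bin.png
```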

Halfish commented 8 years ago

@isaomatsunami Have you made any progress in training on Japanese characters? I'm trying to train ocropy to recognize Chinese now.

isaomatsunami commented 8 years ago

No. I tried ocropy with 200 hidden nodes and found, as far as I can estimate, that it began to learn one character by forgetting another. I am now training clstm on 3,877 classes of Chinese/Japanese characters with 800 hidden nodes. After 150,000 iterations, it stays at a 3.8-5% error rate. See the clstm project.
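For reference, clstm takes its hyperparameters from environment variables; a sketch of such a run, assuming the nhidden, lrate, and save_name variables described in the clstm README (file names are placeholders):

```
# train_files.txt / test_files.txt list paths to line images,
# with matching .gt.txt transcripts alongside each image
nhidden=800 lrate=1e-4 save_name=cjk-model clstmocrtrain train_files.txt test_files.txt
```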

wanghaisheng commented 8 years ago

Any update about Chinese? I have read Adnan's PhD thesis, and I have 2 million documents (PDF or XPS, which we can transform to JPEG) containing both Chinese and English characters. I need some help and tips on how to train a model. Do we need to specify the DPI of the images?

adnanulhasan commented 8 years ago

Hi,

It would be interesting to see how LSTM would work on Chinese. Can you send me some sample pages?

Kind regards,

Adnan Ul-Hasan


wanghaisheng commented 8 years ago

@adnanulhasan
You can reach me here: edwin_uestc@163.com

wanghaisheng commented 8 years ago

@isaomatsunami Sir, how did you get all your ground-truth data? I am using the approach from https://github.com/tmbdev/ocropy/wiki/Working-with-Ground-Truth right now, but I want to generate it from an existing character set.

harinath141 commented 7 years ago

Hi guys, I am working on a model for Telugu, an Indic language, and I am stuck at this point: I just want to train it with the Telugu character set, but ocropus-rtrain loads all characters, digits, and so on. I even created a telugu='' variable in ocrolib/chars.py, but did not succeed. Please help me.

adnanulhasan commented 7 years ago

Hi, training ocropy for Telugu should be straightforward. You can use the -c parameter to build the character set from the GT text files.

harinath141 commented 7 years ago

Hi @adnanulhasan, thanks for your response. I'm trying the command ocropus-rtrain -o te book/0001/010000.bin.png -c telugucharacters, but it's not working.

adnanulhasan commented 7 years ago

Give the path to the gt.txt files instead of mentioning telugucharacters: -c book/0001/010000.gt.txt
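Putting it together, a sketch of the full command (the paths and model name are placeholders, and this assumes -c accepts one or more GT text files from which to construct the codec):

```
ocropus-rtrain -c book/*/*.gt.txt -o telugu-model book/*/*.bin.png
```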

harinath141 commented 7 years ago

@adnanulhasan Thanks, dude. Sometimes a traceback error comes up during training. Is this issue still open?

switchfootsid commented 7 years ago

@adnanulhasan One of the papers from your group mentions the availability of a ground-truth Devanagari database called 'Dev-DB'. Is there a possibility you can link me to it?

ghost commented 7 years ago

@adnanulhasan If I want to train an Arabic model, do you suggest using ocropy or clstm? What changes should I make to ocropy, e.g. to chars.py?