According to Ray, Cube is a dead end and may be removed from the code soon (see e.g. [1]), so you cannot expect any progress...
[1] https://groups.google.com/d/msg/tesseract-dev/mtIJXoUpfJc/6f0EwVNXOM8J
@zdenop Since Cube is going away, perhaps this can be closed?
Let's wait for Ray...
It's been 'going away' for several years now... :-)
Although Cube is being discontinued, its training procedure was never published. Somehow I get the feeling that Cube was purposely sabotaged and withheld from the public.
The new LSTM based engine is here.
@theraysmith, I see that the Cube engine is still present in the code. Are you going to drop it in the final 4.0 release?
Actually, I have a comment on this. There is one reason why Cube has survived this long: for Hindi, cube+tesseract has half the error rate of either engine on its own. I haven't actually tested that against the new LSTM engine yet, but I will on Monday, and if the new LSTM engine is better, then yes, Cube is likely to get the chop for 4.00, and the ifdefs will be very useful.
@theraysmith
Since the hardware requirements for 4.0 are going to be higher than for the 3.xx versions, it would be good to keep the Hindi cube+tesseract combination available as well.
The accuracy results you mention for Hindi are for which version: 3.02, 3.03, or 3.04?
Tests complete. Decision made. Cube is going away in 4.00. Results:

| Engine | Total char errors | Word recall errors | Word precision errors | Wall time | CPU time |
|---|---|---|---|---|---|
| Tess 3.04 | 13.9 | 30 | 31.2 | 3.0 | 2.8 |
| Cube | 15.1 | 29.5 | 30.7 | 3.4 | 3.1 |
| Tess+Cube | 11.0 | 24.2 | 25.4 | 5.7 | 5.3 |
| LSTM | 7.6 | 20.9 | 20.8 | 1.5 | 2.5 |
Note in the table above that LSTM is faster than Tess 3.04 (without adding Cube) in both wall time and CPU time! For wall time, by a factor of 2.
Can you provide some details about the hardware used for the test? Did you also test on a single-core CPU to see the difference?
And what about the language model used for the test? Is it already available so I can use it for my own tests?
OK, the big test I ran in a Google data center. For comparison, I just ran a test on my machine (an HP Z420) on a single Hindi page, running each mode 3 times (using time) and taking the median. My machine has AVX, which will still have sped things up a bit, so I also tried without AVX/SSE: I disabled OpenMP by adding #undef _OPENMP in functions.h, line 33, and disabled AVX/SSE in weightmatrix.cpp, lines 66-67.
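For anyone reproducing the non-SIMD numbers, the two changes were roughly as follows. This is only a sketch: the line numbers come from the comment above, and the AVX/SSE selection shown is an assumption about what sits in weightmatrix.cpp at that revision.

```cpp
// functions.h, near line 33: with _OPENMP undefined, all
// "#ifdef _OPENMP" parallel sections compile away, leaving the
// single-threaded code paths.
#undef _OPENMP

// weightmatrix.cpp, near lines 66-67: the vectorised dot product is
// selected here; commenting the branches out falls back to the plain
// serial implementation. Assumed shape of the code:
//
//   if (SIMDDetect::IsAVXAvailable()) {
//     // use DotProductAVX
//   } else if (SIMDDetect::IsSSEAvailable()) {
//     // use DotProductSSE
//   }
```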
| Test mode | Real (s) | User (s) |
|---|---|---|
| Default (-oem 3 = Cube + Tess) | 7.6 | 7.3 |
| Base Tess (-oem 0) | 2.9 | 2.6 |
| Cube (-oem 1) | 5.4 | 4.9 |
| LSTM, with OpenMP + AVX | 1.8 | 3.8 |
| LSTM, no OpenMP, with AVX | 2.7 | 2.4 |
| LSTM, no OpenMP, with SSE | 3.1 | 2.7 |
| LSTM, no OpenMP, no SIMD at all | 4.6 | 4.1 |
I think these tests nail cube as being slower and less accurate.
There may be a debate as to the value of the old Tesseract engine for its speed vs the new one for its accuracy.
I'm going to push the data files now.
> I'm going to push the data files now.
Got the first ones. My first test with a simple screenshot gave significantly better results with LSTM, but needed 16 minutes of CPU time (instead of 9 seconds) with a debug build of Tesseract (-O0). A release build (-O2) needs 17 seconds with LSTM and 4 seconds without for the same image.
Are there also new data files planned for old German (deu_frak)? I was surprised that the default English model with LSTM could recognize some words.
> Got the first ones. My first test with a simple screenshot gave significantly better results with LSTM, but needed 16 minutes of CPU time (instead of 9 seconds) with a debug build of Tesseract (-O0). A release build (-O2) needs 17 seconds with LSTM and 4 seconds without for the same image.
The slow speed with debug is to be expected. The new code is much more memory intensive, so it is a lot slower in debug builds (also, OpenMP is deliberately turned off for debug). The optimized build speed sounds about right for Latin-based languages; it is the complex scripts that will run faster relative to base Tesseract.
> Are there also new data files planned for old German (deu_frak)? I was surprised that the default English model with LSTM could recognize some words.
I don't think I generated the original deu_frak. I have the fonts to do it with LSTM, but I don't know if I have a decent amount of corpus data to hand. With English at least, the language was different in the days of Fraktur ('Ye Olde Shoppe'). I know German continued to be written in Fraktur until the 1940s, so that might be easier. Or is there an old German that is analogous to 'Ye Olde Shoppe' for English?
Is there a 3.04 vs 4.0 branch in tessdata for the traineddata files?
Stefan, please share the binaries for 4.0 alpha for Windows. I am interested in trying the Hindi and other Indian languages traineddata. Thanks.
> I know German continued to be written in Fraktur until the 1940s, so that might be easier. Or is there an old German that is analogous to 'Ye Olde Shoppe' for English?
Fraktur was used for an important German newspaper (the Reichsanzeiger) until 1945. I'd like to try some pages from that newspaper with Tesseract LSTM. Surprisingly, even with the English data Tesseract was able to recognize at least some words written in Fraktur. Could you give me some hints on how to create the data for deu_frak?
There is an Old High German (analogous to Old English), but the German translation of the New Testament by Martin Luther (1521) was one of the first major printed books in German, and it basically started the modern German language (High German), which is still in use today.
I think it would be great to move this discussion to the (developers) forum. We are already out of the scope of the original issue post, and many more people would be interested in the "Fraktur topic"...
> Stefan, please share the binaries for 4.0 alpha for Windows.
@Shreeshrii, they are online now at the usual location. See also the related pull request #511. Please report results either in the developer forum as suggested by @zdenop or by personal mail to me.
> Is there a 3.04 vs 4.0 branch in tessdata for the traineddata files?

https://github.com/tesseract-ocr/tessdata/tree/3.04.00

Thanks, Amit. Please add the info to the wiki also, if you have not already done so.
Thanks, I will give it a try and report back.
> Amit, please add the info to the wiki also, if you have not already done so.
You can do it yourself... :)
@theraysmith @stweil
Thank you! I tested a few Devanagari pages with the 4.0 alpha Windows binaries and traineddata for Hindi, Sanskrit, Marathi, and Nepali. This was on a Windows 10 netbook with an Intel Atom 1.33 GHz CPU (x64-based processor), a 32-bit OS, and 2 GB RAM. I tested only single-page images, and there was no performance problem on this basic netbook. The accuracy is much improved in the LSTM version. This is just from eyeballing the output (not using any comparison software).
From a user's point of view, better accuracy may be preferred over speed. So the LSTM-based engine seems the way to go, at least for Devanagari scripts. I will test some of the other Indian languages later.
I have noticed some differences in processing between Hindi and the other Devanagari-based languages and will add issues to the tessdata repository.
Thanks to the developers at Google and the Tesseract community!
@theraysmith
> I don't think I generated the original deu_frak. I have the fonts to do it with LSTM, but I don't know if I have a decent amount of corpus data to hand.
I have a decent amount of Fraktur corpus data from scanned books at hand, about 500k lines in hOCR files (~50 GB with TIFF images). I have yet to publish it, but if you have somewhere I could send/upload it, I'd be glad to.
Or is there a way to create the necessary training files myself? I've had a cursory look through the OCR code, and it looks like it needs lstmf files, but I haven't yet found what these are supposed to look like.
I have a new training md file in prep, with an update to the code to make it all work correctly. It is going through our review process, and then I will need to sync again with the changes that have happened since my last sync, but it should be available late this week. The md file documents the training process in tutorial detail, and line boxes with transcriptions sound perfect!
500k lines should make it work really well. I would be happy to take it and help you, but we would have to get into licenses, copyright and all that first. For now it might be best to hang on for the instructions.
> 500k lines should make it work really well. I would be happy to take it and help you, but we would have to get into licenses, copyright and all that first.
The text is CC0 and the images are CC-BY-NC, so that shouldn't be an issue :-) They're going to be public anyway once I've prepped the dataset for publication. But even better if there are instructions; looking forward to playing around with training!
Ray,
Please see my recent comment and attached files in https://github.com/tesseract-ocr/tessdata/issues/6
Adding config files to the traineddata for san, mar, and nep will fix this issue related to skipped text with the default psm.
I made a copy of hin.config and changed the default engine to OEM 4, LSTM. I also removed the blacklisting of 1, since Indo-Arabic numerals as used with Latin scripts quite commonly appear alongside Devanagari script text.
There are various other Devanagari-related options in the config file, which can be removed if they are not needed with LSTM.
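For illustration, the key line of such a config might look like the sketch below, in Tesseract's plain `name value` config format ('4' being the LSTM engine mode discussed in this thread). tessedit_ocr_engine_mode is the standard parameter for the default engine; the rest of the real hin.config is omitted here, and the '#' comment assumes the usual config-file comment handling.

```
# san.config (sketch): default to the LSTM engine (OEM 4)
tessedit_ocr_engine_mode 4
```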
Thanks.
Cube is gone! Removal completed as of 9d6e4f6
Sad news: Cube is no longer with us.
Cube, you will be missed...
@jbaiter have you tried 4.0 training for Fraktur?
@theraysmith Is there a way to use the old box-tiff pairs at https://github.com/paalberti/tesseract-dan-fraktur for LSTM training?
Also see the related Tesseract issue at https://github.com/paalberti/tesseract-dan-fraktur/issues/3
> Is there a way to use the old box-tiff pairs at https://github.com/paalberti/tesseract-dan-fraktur for LSTM training?
There will be a way to generate a box file from a tiff image. The box file will be written in the textline format: https://github.com/tesseract-ocr/tesseract/issues/659#issuecomment-272564420. I started working on this today. I wrote the needed code, and it seems to output the desired format, but I need to do some tests before publishing it.
@amitdo Not sure if that will work for Devanagari, because of the length of the Unicode strings.
Is it possible to just add a box with the tab character at the end of each line to existing box files?
> Not sure if that will work for Devanagari, because of the length of the Unicode strings.
We will wait and see...
> Is it possible to just add a box with the tab character at the end of each line to existing box files?
You mean manually? You should add box coordinates, not just a tab character.
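For illustration, assuming the usual `symbol left bottom right top page` box format, the end-of-line marker would be one more box record whose symbol is a literal tab character, placed just past the last glyph of the line (all coordinates below are invented):

```
क 105 40 140 88 0
ी 141 40 160 88 0
	 161 40 163 88 0
```

The first field of the last record above is a literal tab character, which is easy to overlook when inspecting the file.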
What steps will reproduce the problem?
What is the expected output? What do you see instead? In both cases the output should be 1234567890.
What version of the product are you using? On what operating system? I've tried Tesseract 3.03, both on Mac and iOS.
Please provide any additional information below. There is a related thread in the Tesseract-OCR-iOS wrapper, where the issue was originally found: https://github.com/gali8/Tesseract-OCR-iOS/issues/140. You may ask for any additional info there.