mindee / doctr

docTR (Document Text Recognition) - a seamless, high-performing & accessible library for OCR-related tasks powered by Deep Learning.
https://mindee.github.io/doctr/
Apache License 2.0

Option to cutout documents from images before further processing #554

Open felixdittrich92 opened 2 years ago

felixdittrich92 commented 2 years ago

@fg-mindee @charlesmindee

what do you think about an option like `DocumentFile.from_images(..., try_cutout=True)` which does the following: Example. I currently have a modified, more stable version of this running in our company :)

Use case: for example, mobile phone photos of documents.

It would be nice if I could implement this in docTR as well :) What do you think?

fg-mindee commented 2 years ago

Hey there :wave:

Actually, we tackled this internally a few weeks back, and it will be integrated into docTR soon :smile: But that solution involves a DL segmentation model. If you think this could benefit from a classic CV approach, we could discuss that option as well!

Cheers!

felixdittrich92 commented 2 years ago

Hi :wave:,

in that case I would say: once your model is ready, let's compare both approaches :) If you want, I can prepare a notebook that can be used for testing purposes !? :smiley:

fg-mindee commented 2 years ago

That's a good idea indeed: if you could share a runnable Colab notebook so that we can compare, that would be great :+1: (not opening a PR, just sharing it here)

felixdittrich92 commented 2 years ago

Colab Example

It's only a very basic example, but it should be enough for testing purposes :) Let me know if you need anything else :+1:

PS: I have also found that it works much better if the image is first resized to a much smaller size and the detected points are then scaled back to the original resolution before calling _four_point_transform (not done in the Colab example).
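
To make that PS concrete, here is a rough sketch of the idea, assuming OpenCV (4.x `findContours` signature): the corners are detected on a downscaled copy, scaled back to the original resolution, and only then warped. The helper names, the Otsu binarization and the fixed working width are assumptions for illustration, not the code from the Colab.

```python
import cv2
import numpy as np

def order_points(pts):
    # Order the 4 corners as top-left, top-right, bottom-right, bottom-left
    rect = np.zeros((4, 2), dtype="float32")
    s = pts.sum(axis=1)
    d = np.diff(pts, axis=1)
    rect[0] = pts[np.argmin(s)]  # top-left: smallest x + y
    rect[2] = pts[np.argmax(s)]  # bottom-right: largest x + y
    rect[1] = pts[np.argmin(d)]  # top-right: smallest y - x
    rect[3] = pts[np.argmax(d)]  # bottom-left: largest y - x
    return rect

def four_point_transform(image, pts):
    # Warp the quadrilateral defined by pts into a straight, top-down crop
    rect = order_points(pts)
    tl, tr, br, bl = rect
    width = int(max(np.linalg.norm(br - bl), np.linalg.norm(tr - tl)))
    height = int(max(np.linalg.norm(tr - br), np.linalg.norm(tl - bl)))
    dst = np.array(
        [[0, 0], [width - 1, 0], [width - 1, height - 1], [0, height - 1]],
        dtype="float32",
    )
    M = cv2.getPerspectiveTransform(rect, dst)
    return cv2.warpPerspective(image, M, (width, height))

def cutout_document(image, work_width=500):
    # Detect the page contour on a small copy, but warp the full-resolution image
    scale = work_width / image.shape[1]
    small = cv2.resize(image, (work_width, int(image.shape[0] * scale)))
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    _, thresh = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # OpenCV 4.x signature: returns (contours, hierarchy)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return image  # nothing found, fall back to the original image
    page = max(contours, key=cv2.contourArea)
    approx = cv2.approxPolyDP(page, 0.02 * cv2.arcLength(page, True), True)
    if len(approx) != 4:
        return image  # the 4 corners could not be detected
    # Scale the corners back to the original resolution before warping
    corners = approx.reshape(4, 2).astype("float32") / scale
    return four_point_transform(image, corners)
```

Doing the contour search at a fixed small width keeps the thresholding and polygon approximation stable across very different input resolutions, which is the effect described in the PS.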

fg-mindee commented 2 years ago

Thanks a lot, it looks promising for single-page docs! To make sure this matches the same specs, could you illustrate a situation where there are several pages in the same image? (the image segmentation model does handle that correctly)

felixdittrich92 commented 2 years ago

@fg-mindee I think in that case the segmentation model is really much more accurate. Do you have some benchmarks for it? Or maybe a short Colab as well? 🤗

Have a nice weekend

fg-mindee commented 2 years ago

Well sure, but perhaps we could change your Colab to make it work for multiple pages?

Regarding the segmentation option, there is no Colab, but it will be integrated into docTR within a week or two!

felixdittrich92 commented 2 years ago

@fg-mindee yes, sure, we can do that, but I think it will not work very well :smile: Let me prepare a second Colab for this :)

felixdittrich92 commented 2 years ago

@fg-mindee I will share the other notebook tomorrow.

felixdittrich92 commented 2 years ago

@fg-mindee

  1. The example now also handles multi-page images: Colab Example Multipage

BUT: this works great, though for production it would need many additional checks. I'm really curious to see how accurate and fast your segmentation solution is :hugs: Have you also tested slightly overlapping documents?

fg-mindee commented 2 years ago

Nice :+1:

I'm only concerned about the color filtering, which seems to be key to the performance of this method. It's usually not robust under bad lighting or other degrading conditions.

For the segmentation-based approach, I'll have to check and will let you know next week :+1:

felixdittrich92 commented 2 years ago

@fg-mindee That can be tackled with blurring (see the thresholding sketch right after the list below). With this method the main problems are:

  1. finding the right threshold value
  2. other rectangular objects in the image
  3. overlapping documents, or cases where the 4 corners can't be detected
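
Regarding the first point, a small sketch of binarization options that avoid a hand-tuned global threshold, assuming OpenCV; the blur kernel, block size and Canny thresholds are illustrative assumptions:

```python
import cv2

def binarize_for_contours(gray):
    # Blur first to suppress texture and sensor noise, then avoid a hand-tuned
    # global threshold: Otsu derives it from the histogram, adaptive thresholding
    # computes a local threshold per neighbourhood (more robust to uneven lighting),
    # and Canny edges sidestep a global threshold entirely.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    _, otsu = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    adaptive = cv2.adaptiveThreshold(
        blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 51, 10
    )
    edges = cv2.Canny(blurred, 50, 150)
    return otsu, adaptive, edges
```

Which of the three variants works best still depends on the scene, so this only softens problem 1; problems 2 and 3 remain open.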
fg-mindee commented 2 years ago

For sure, we need to conduct some thorough evaluations now to ensure that this method is robust (or can be made robust)! We'll check next week with @charlesmindee; in the meantime, if you have any ideas to make it more robust, feel free to iterate on this approach :+1:

felixdittrich92 commented 2 years ago

@fg-mindee yes, but it would be great to have your segmentation model so we can compare the DL approach with this CV approach :smile:

felixdittrich92 commented 2 years ago

@fg-mindee any update on this? :) Have you been able to test your segmentation model successfully, or does it make sense to stick with my approach here? :smiley:

fg-mindee commented 2 years ago

We should be able to have something in December, but for now there is already a lot on our plate :sweat_smile:

fg-mindee commented 2 years ago

@charlesmindee would you mind taking a look at integrating your implementation in docTR for release 0.6.0? :pray: (no hurry for now)

felixdittrich92 commented 2 years ago

@charlesmindee @frgfm any update on whether we will keep this for 0.6.0? :)

frgfm commented 2 years ago

This is more up to @charlesmindee for the integration 👍

Generally speaking:

So in this case, yes, that should be kept for 0.6.0 :)

felixdittrich92 commented 1 year ago

@frgfm @charlesmindee If you want, we could also include my model (I worked on document segmentation for my company; it will be finished by the end of the year). It is a mobilenet_v3_small with a Pyramid Attention Network as segmentation head. It currently runs at 30 FPS on mobile devices (CPU), takes ~1-2 ms on an i7, and reaches 96% mIoU (custom dataset). It works fine with onnxruntime or OpenCV's dnn module. The only disadvantage is that I could only share the ONNX model + inference code (not the pure model code, because it's company-internal). Wdyt?
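
Not the internal model code, of course, but for illustration a minimal sketch of what ONNX inference for such a document-segmentation model could look like with onnxruntime; the file name, input size, normalization and output layout are assumptions, not the actual model's interface:

```python
import cv2
import numpy as np
import onnxruntime as ort

# Load the exported segmentation model (file name is an assumption)
session = ort.InferenceSession("doc_segmentation.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

def segment_document(image_bgr, input_size=(256, 256), threshold=0.5):
    # Preprocess: resize to the network input size, scale to [0, 1], NCHW float32
    resized = cv2.resize(image_bgr, input_size)
    blob = np.transpose(resized.astype("float32") / 255.0, (2, 0, 1))[None]
    # Run the network; a single-channel probability map output is assumed here
    prob = session.run(None, {input_name: blob})[0][0, 0]
    # Binarize and scale the mask back to the original image size
    mask = (prob > threshold).astype("uint8") * 255
    return cv2.resize(mask, (image_bgr.shape[1], image_bgr.shape[0]), interpolation=cv2.INTER_NEAREST)
```

The resulting mask could then be turned into a quadrilateral with the same contour / four-point logic as the classic CV sketch earlier in this thread.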

frgfm commented 1 year ago

Mmmh, I think we should consider document edge segmentation as a separate task that can be handled by docTR. That way, people could pass their images to the corresponding model without making the core pipeline too complex for now.

felixdittrich92 commented 1 year ago

Sounds good to me 👍

felixdittrich92 commented 3 months ago

Topic for contrib module