clovaai / donut

Official Implementation of OCR-free Document Understanding Transformer (Donut) and Synthetic Document Generator (SynthDoG), ECCV 2022
https://arxiv.org/abs/2111.15664
MIT License

Task | Understanding Paragraphs & Document Layout Analysis #20

Open jordanparker-me opened 1 year ago

jordanparker-me commented 1 year ago

Thanks for publishing this interesting work.

Would I be able to extend the Document Understanding task to learn hierarchies over paragraphs of text within a page? Or is the 512-token limit going to prohibit OCR of whole paragraphs?

Would the input look like so?

{
    "file_name": {image_path1},
    "ground_truth": `{
          "item": [{ "title": "item-title", "text": "insert some paragraph of text" }],
          "table": ["column 1 column 2 column 3 0 0 3"],
          "title": ["title page"]
    }`
}

Further, would it be possible to alter the task objective to Document Layout Analysis and train on PubLayNet, as per LayoutLMv3?

logan-markewich commented 1 year ago

Just an idea, but you could probably model document hierarchy by adding special tokens to the tokenizer.

For example, DocVQA is trained using an input like `<s_docvqa><s_question>my question?</s_question><s_answer>`, and the output then completes the input -> `<s_docvqa><s_question>my question?</s_question><s_answer>my answer</s_answer>`

So following that logic, you could construct an input prompt as simply `<s_layout>` and then the output could be `<s_title>My title</s_title><s_paragraph>My paragraph text</s_paragraph>`

This would work for training, as long as you add the proper special tokens.
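
Something like this (an untested sketch using the Hugging Face port of Donut; the tag names here and above are just placeholders you'd pick yourself):

```python
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Load the pretrained Donut checkpoint (Swin encoder + BART-style decoder).
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base")

# Hypothetical hierarchy tokens -- use whatever structure you need.
new_tokens = ["<s_layout>", "<s_title>", "</s_title>", "<s_paragraph>", "</s_paragraph>"]
processor.tokenizer.add_special_tokens({"additional_special_tokens": new_tokens})

# Give the new tokens trainable embedding rows in the decoder.
model.decoder.resize_token_embeddings(len(processor.tokenizer))
```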

In terms of the token limit though, you would definitely run into some problems. If you cared more about the structure, maybe you could just predict the first line of each hierarchical element to limit the prediction size. If you are working with synthetically generated data, this would be pretty easy to do!
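
For example (a rough sketch; the truncation rule is just one possibility, and `gt_parse` follows the dataset format from this repo's README):

```python
import json

def first_line(text: str) -> str:
    """Keep only the first line of an element to shrink the target sequence."""
    lines = text.splitlines()
    return lines[0] if lines else ""

# Hypothetical page structure coming out of a synthetic generator.
page = {
    "title": ["title page"],
    "item": [{"title": "item-title", "text": "first line of paragraph\nrest of the paragraph..."}],
}

gt_parse = {
    "title": page["title"],
    "item": [{"title": it["title"], "text": first_line(it["text"])} for it in page["item"]],
}
sample = {"file_name": "image_0.png", "ground_truth": json.dumps({"gt_parse": gt_parse})}
```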

In terms of doing DLA, by outputting the hierarchy you are already performing a type of layout analysis. Adapting existing datasets might be tough, though; maybe check out DocBank. They have token-level layout annotations that would work well for a text-based model like this one.
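
Something along these lines could get you from DocBank annotations to a Donut-style parse (untested; I'm assuming each annotation line is tab-separated as `token x0 y0 x1 y1 R G B font label`, so double-check against the actual files):

```python
from collections import defaultdict
from itertools import groupby

def docbank_to_gt_parse(annotation_path: str) -> dict:
    """Group consecutive DocBank tokens by layout label into a Donut-style parse."""
    rows = []
    with open(annotation_path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) == 10:                   # token, 4 bbox coords, RGB, font, label
                rows.append((parts[0], parts[9]))  # keep (token, label)

    # Merge consecutive runs of tokens that share a label, e.g. "paragraph", "title".
    parse = defaultdict(list)
    for label, group in groupby(rows, key=lambda r: r[1]):
        parse[label].append(" ".join(token for token, _ in group))
    return dict(parse)
```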

jordanparker6 commented 1 year ago

@logan-markewich I like your approach.

What are your thoughts on replacing the BART decoder with one of the long-range Transformer architectures (e.g. Longformer or Performer)?

I don't have much experience with them, but I understand they have O(n) complexity instead of O(n²) in the sequence length.

logan-markewich commented 1 year ago

@jordanparker6 I don't have any experience using long-range transformers, but it's probably worth trying! Assuming you have the resources needed to load and train the model 👍🏻

Thankfully, looking at the code, swapping out BART for anything else should be pretty straightforward, and you won't have to repeat the OCR pre-training either.
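
For example, with the Hugging Face `VisionEncoderDecoderModel` wrapper you can pair a Swin encoder with any decoder registered as a causal LM (sketch only; `gpt2` below is a stand-in to show the mechanics, not a long-range model, and the new cross-attention weights are randomly initialized, so the combined model needs fine-tuning):

```python
from transformers import VisionEncoderDecoderModel

# Pair a Swin vision encoder (the same family Donut uses) with an
# arbitrary autoregressive text decoder; any AutoModelForCausalLM works.
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "microsoft/swin-base-patch4-window7-224",  # vision encoder
    "gpt2",                                    # stand-in decoder
)
```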

jlia0 commented 11 months ago

Any updates on this? Why is Donut difficult to apply to the Document Layout Analysis task?