jordanparker-me opened this issue 1 year ago
Thanks for publishing this interesting work.

Would I be able to extend the Document Understanding task to learn hierarchies over paragraphs of text within a page? Or is the 512-token limit going to prohibit the OCR of full paragraphs?

Would the input look something like `<s_doc><s_section><s_heading>…</s_heading><s_paragraph>…</s_paragraph></s_section></s_doc>`?

Further, would it be possible to alter the task objective to Document Layout Analysis and train on PubLayNet, as per LayoutLMv3?
Just an idea, but you could probably model document hierarchy by adding special tokens to the tokenizer.
For example, DocVQA is trained using an input like `<s_docvqa><s_question>{question}</s_question><s_answer>{answer}</s_answer>`.
So following that logic, you could construct an input prompt as simply nested tags, like `<s_doc><s_section><s_heading>…</s_heading><s_paragraph>…</s_paragraph></s_section></s_doc>`.
This would work for training, as long as you add the proper special tokens.
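For what it's worth, here's a minimal sketch of that setup against the Hugging Face Donut checkpoint; the tag names are just the illustrative ones from above, not anything the model already knows:

```python
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base")

# Register the hierarchy tags as special tokens so the tokenizer never splits them.
hierarchy_tags = [
    "<s_doc>", "</s_doc>",
    "<s_section>", "</s_section>",
    "<s_heading>", "</s_heading>",
    "<s_paragraph>", "</s_paragraph>",
]
processor.tokenizer.add_special_tokens({"additional_special_tokens": hierarchy_tags})

# Grow the decoder's embedding matrix to cover the new vocabulary entries.
model.decoder.resize_token_embeddings(len(processor.tokenizer))
```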
In terms of the token limit, though, you would definitely run into some problems. If you care more about the structure than the full text, you could predict only the first line of each hierarchical element to limit the output size. If you are working with synthetically generated data, this would be pretty easy to do!
In terms of doing DLA, by outputting the hierarchy you are already performing a type of layout analysis. Adapting existing datasets might be tough, but maybe check out DocBank: it has token-level layout annotations that would work well for a text-based model like this one.
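As a rough sketch of the kind of conversion I mean, assuming a generic export of parallel token/label lists in reading order (DocBank's actual files need some parsing first):

```python
from itertools import groupby

def build_target(tokens, labels, first_line_only=False):
    """Collapse parallel token/label lists into a Donut-style tag sequence."""
    parts = []
    # Group consecutive tokens that share a layout label into one element.
    for label, group in groupby(zip(tokens, labels), key=lambda pair: pair[1]):
        words = [tok for tok, _ in group]
        if first_line_only:
            # Crude stand-in for "keep only the first line" truncation.
            words = words[:10]
        parts.append(f"<s_{label}>{' '.join(words)}</s_{label}>")
    return "<s_doc>" + "".join(parts) + "</s_doc>"
```

So `build_target(["1.", "Introduction", "Deep", "learning", "works."], ["heading", "heading", "paragraph", "paragraph", "paragraph"])` gives `<s_doc><s_heading>1. Introduction</s_heading><s_paragraph>Deep learning works.</s_paragraph></s_doc>`, which you can tokenize as the decoder target.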
@logan-markewich I like your approach.
What are your thoughts on replacing BART with one of the long-range Transformer architectures (e.g. Longformer or Performer)?
I don't have much experience with them, but I understand they have O(n) complexity in the sequence length instead of O(n²).
@jordanparker6 I don't have any experience using long-range transformers, but it's probably worth trying! Assuming you have the resources needed to load and train the model 👍🏻
Thankfully, looking at the code, swapping out BART for another decoder should be pretty straightforward, and you won't have to repeat the OCR pre-training either.
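For reference, the Hugging Face VisionEncoderDecoder API makes that kind of swap explicit. This is only a sketch: the model IDs are illustrative, and GPT-2 is a placeholder, since the decoder has to be exposed as a causal LM with cross-attention support, which Longformer and Performer aren't in transformers as far as I know:

```python
from transformers import VisionEncoderDecoderModel

# Pair a Swin vision encoder (Donut's encoder family) with a different
# text decoder. Any decoder registered as a causal LM can slot in here.
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "microsoft/swin-base-patch4-window12-384-in22k",  # vision encoder
    "gpt2",                                           # placeholder decoder
)
```

The cross-attention weights come in randomly initialized, so the pair still needs fine-tuning end to end, but the pretrained encoder and decoder weights carry over.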
Any updates on this? Why is Donut difficult to adapt to the Document Layout Analysis task?