shabie / docformer

Implementation of DocFormer: End-to-End Transformer for Document Understanding, a multi-modal transformer-based architecture for Visual Document Understanding (VDU)
MIT License

Pre-trained models #43

Open caop-kie opened 2 years ago

caop-kie commented 2 years ago

Thanks for the great work! Do you have any plan to release the pre-trained model of docformer?

uakarsh commented 2 years ago

Hi @AYSP, thanks for your appreciation. We have the scripts ready, as of now, to pre-train DocFormer, but we are not sure they would produce the exact same results as the paper, since the authors didn't describe the exact collection of data they used for pre-training (although it was RVL-CDIP). Besides that, we are resource-constrained, which also makes pre-training a bit difficult.

Regards, Akarsh

jmandivarapu1 commented 1 year ago

@uakarsh Can you release the existing pre-training code? Even though it doesn't produce good results, it would be a good starting point.

uakarsh commented 1 year ago

Hi @jmandivarapu1,

Although I didn't write the entire code, I did write it up to the point where the PyTorch dataset object can be built and one iteration/batch's forward and backward pass can be run.

Here is the code: https://github.com/shabie/docformer/blob/master/examples/DocFormer_for_MLM.ipynb

Hope it helps.
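
In case the notebook moves, here is a self-contained sketch of that same single MLM iteration (one forward and backward pass). The tiny encoder and the random batch below are stand-ins for the repo's actual DocFormer classes, and the 15% masking ratio and `[MASK]` id 103 follow BERT conventions:

```python
import torch
import torch.nn as nn

# Stand-in MLM model: one Transformer encoder layer with an LM head.
# This is NOT the repo's DocFormer implementation, just the training shape.
VOCAB, SEQ, HID, BATCH = 30522, 64, 768, 2

class TinyMLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HID)
        self.encoder = nn.TransformerEncoderLayer(d_model=HID, nhead=12, batch_first=True)
        self.lm_head = nn.Linear(HID, VOCAB)

    def forward(self, input_ids):
        return self.lm_head(self.encoder(self.embed(input_ids)))

model = TinyMLM()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
criterion = nn.CrossEntropyLoss(ignore_index=-100)  # -100 = unmasked, ignored

# Fake batch: mask 15% of tokens, predict the originals at masked positions.
input_ids = torch.randint(0, VOCAB, (BATCH, SEQ))
labels = torch.full((BATCH, SEQ), -100, dtype=torch.long)
mask = torch.rand(BATCH, SEQ) < 0.15
labels[mask] = input_ids[mask]
input_ids[mask] = 103                               # BERT's [MASK] token id

loss = criterion(model(input_ids).view(-1, VOCAB), labels.view(-1))
loss.backward()                                     # one backward pass
optimizer.step()
optimizer.zero_grad()
```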

uakarsh commented 1 year ago

I will keep working on the MLM objective on my side (although there are 3 pre-training tasks in total) and will update shortly.
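
For context, a rough sketch of how the three objectives from the paper (multi-modal MLM, Learn-to-Reconstruct, and Text-Describes-Image) could be combined into one pre-training loss. The loss choices and weights below are illustrative assumptions, not the paper's exact setup:

```python
import torch
import torch.nn as nn

# Random stand-in outputs/targets; shapes are illustrative only.
mlm_logits = torch.randn(2, 64, 30522)        # masked-token predictions
mlm_labels = torch.randint(0, 30522, (2, 64))
recon = torch.rand(2, 3, 224, 224)            # Learn-to-Reconstruct output
target = torch.rand(2, 3, 224, 224)           # original document image
tdi_logits = torch.randn(2, 2)                # Text-Describes-Image (binary)
tdi_labels = torch.randint(0, 2, (2,))

mlm_loss = nn.CrossEntropyLoss()(mlm_logits.view(-1, 30522), mlm_labels.view(-1))
ltr_loss = nn.SmoothL1Loss()(recon, target)   # smooth-L1 is an assumption here
tdi_loss = nn.CrossEntropyLoss()(tdi_logits, tdi_labels)

total_loss = mlm_loss + ltr_loss + tdi_loss   # equal weights: a placeholder choice
```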

Thanks,

uakarsh commented 1 year ago

Hi @jmandivarapu1 @AYSP, can you try fine-tuning again using the pre-trained weights? (I have attached them in the README.)
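
If it helps, here is a minimal sketch of loading such a checkpoint for fine-tuning. The file name `docformer_pretrained.pth`, the backbone definition, and the 16-class head (e.g. RVL-CDIP) are placeholders, not the repo's actual API; `strict=False` simply tolerates keys that exist only in the pre-training heads:

```python
import torch
import torch.nn as nn

# Placeholder backbone; substitute the repo's DocFormer encoder here.
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True),
    num_layers=12,
)

# Placeholder checkpoint path; use the file linked in the README.
state_dict = torch.load("docformer_pretrained.pth", map_location="cpu")
missing, unexpected = backbone.load_state_dict(state_dict, strict=False)
print("missing keys:", missing)        # sanity-check what didn't load
print("unexpected keys:", unexpected)

head = nn.Linear(768, 16)              # e.g. 16 RVL-CDIP document classes
optimizer = torch.optim.AdamW(
    list(backbone.parameters()) + list(head.parameters()), lr=2e-5
)
```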