uakarsh / latr

Implementation of LaTr: Layout-aware transformer for scene-text VQA, a novel multimodal architecture for Scene Text Visual Question Answering (STVQA)
https://uakarsh.github.io/latr/
MIT License
52 stars 7 forks

error in max_step #10

Open mohanades opened 1 year ago

mohanades commented 1 year ago

Hi, when training with your source code I hit an error around the 5th epoch (sometimes earlier, sometimes later) that stopped training, so I increased max_step. But after increasing max_step (max_step == 100K) I get loss > 100 and acc == 0. I am attaching a screenshot of the problem. What should I change in the source code to continue training the model without this issue?

error

uakarsh commented 1 year ago

Hi @mohanades, I realized some time back that I had set up the task the wrong way.

Here is what I did:

I tokenized the answers and trained a token-classification style model. When measuring accuracy, I computed it only on the tokens that belong to the answer and ignored the others.
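A minimal sketch of that masked accuracy computation, with hypothetical names (the repo's actual function names and the `-100` ignore id are assumptions, following the common Hugging Face convention):

```python
IGNORE_ID = -100  # assumed sentinel: positions outside the answer are masked with this id


def masked_token_accuracy(pred_ids, label_ids, ignore_id=IGNORE_ID):
    """Fraction of answer-token positions where the predicted token id
    matches the label; non-answer positions (ignore_id) are skipped."""
    matched = total = 0
    for pred, label in zip(pred_ids, label_ids):
        if label == ignore_id:  # not part of the answer, ignore
            continue
        total += 1
        matched += int(pred == label)
    return matched / total if total else 0.0
```

Under this scheme a model can reach high token accuracy while still never producing the exact answer string, which is one reason the metric can diverge from the paper's.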

But here is what I should have done instead:

Given the OCR tokens and an answer, I should locate the answer within the OCR and produce two indices, a start and an end, so that the OCR tokens from the start index to the end index span the answer. The model is then trained to predict these two indices.
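The span-labeling step above can be sketched as a simple exact-match search over the OCR token list (function name and exact matching strategy are assumptions; a real pipeline would also handle normalization and answers absent from the OCR):

```python
def find_answer_span(ocr_tokens, answer_tokens):
    """Return (start, end) indices such that
    ocr_tokens[start:end + 1] == answer_tokens, or None if the
    answer does not appear contiguously in the OCR tokens."""
    n, m = len(ocr_tokens), len(answer_tokens)
    if m == 0:
        return None
    for start in range(n - m + 1):
        if ocr_tokens[start:start + m] == answer_tokens:
            return start, start + m - 1
    return None
```

With (start, end) labels in hand, the model head predicts two distributions over OCR positions, as in extractive QA setups.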

And I believe this is how the authors arrived at the different metrics used in the paper.

I am still learning (and used this setup for the first time), so this may have been a mistake on my side. Apologies again. Sorry for tagging @furkanbiten, but could you confirm whether I have understood this correctly?

Regards, Akarsh