hikopensource / DAVAR-Lab-OCR

OCR toolbox from Davar-Lab
Apache License 2.0

When will the evaluation metric for video text spotting be released? #4

Closed · weijiawu closed this issue 3 years ago

weijiawu commented 3 years ago

Thanks to the contributors for this great open-source project. Some video text spotting algorithms (YORO, FREE) are listed for release in the Todo. I would also like to know whether the corresponding evaluation metric will be released as well, and when.

qiaoliang6 commented 3 years ago

We will release the trained model as well as the evaluation scripts. The work is currently being migrated and is expected to be released in mid-August.

weijiawu commented 3 years ago

Thanks, looking forward to the update.

WeihongM commented 3 years ago

Hello, I am not very familiar with this field. Why are the results in your paper inconsistent with the results displayed on the RRC competition website? The MOTA there is relatively low. @qiaoliang6 @DAVAR-Lab Is the evaluation script the same? Also, I found that some texts are very blurry (unclear). Are these also considered in the metric calculation?

WeihongM commented 3 years ago

@qiaoliang6 That is not what I meant: the score in your ACM MM paper is higher than the one on the RRC website, so I don't understand whether you used a different evaluation script. The MOTA and MOTP scores are 43.16% and 76.78% on the RRC website, whereas in your paper (ACM MM version) the MOTA and MOTP scores are 68 and 76 for the IC15 detection phase.

weijiawu commented 3 years ago

@WeihongM @qiaoliang6

Actually, the current script differs from the one used by the ICDAR website. I contacted the ICDAR officials by email about the discrepancy. The main reason appears to be that, starting from 2020, the ICDAR committee adopted the new evaluation framework of the CVPR19 MOTChallenge (Multiple Object Tracking) benchmark [1] for the Text in Videos challenge.

Besides, I hope the metric can be released as soon as possible, since it is the foundation for algorithm evaluation and comparison.

[1] Dendorfer, P., Rezatofighi, H., Milan, A., Shi, J., Cremers, D., Reid, I., Roth, S., Schindler, K., & Leal-Taixé, L. (2019). CVPR19 Tracking and Detection Challenge: How crowded can it get? arXiv preprint arXiv:1906.04567.
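
In the meantime, here is a minimal sketch of how MOTA/MOTP can be computed with the open-source py-motmetrics library, whose results are designed to be compatible with the MOTChallenge benchmark (I am not claiming this is the RRC script; the box coordinates and track IDs below are made-up illustrative values). Note that py-motmetrics reports MOTP as an average matching distance (1 − IoU), not the percentage form shown on the RRC website.

```python
import numpy as np
import motmetrics as mm  # pip install motmetrics

# One accumulator per video sequence; frame ids are assigned automatically.
acc = mm.MOTAccumulator(auto_id=True)

# Frame 1: two ground-truth text boxes and two hypotheses (x, y, w, h).
# These values are purely illustrative.
gt_boxes = np.array([[10, 10, 30, 15], [60, 40, 25, 12]])
hyp_boxes = np.array([[11, 11, 30, 15], [62, 41, 25, 12]])

# Pairwise cost = 1 - IoU; pairs with IoU below 0.5 count as unmatched.
dist = mm.distances.iou_matrix(gt_boxes, hyp_boxes, max_iou=0.5)
acc.update(['gt_a', 'gt_b'], ['hyp_1', 'hyp_2'], dist)

# Frame 2: the tracker misses the second ground-truth box (a false negative).
dist = mm.distances.iou_matrix(gt_boxes, hyp_boxes[:1], max_iou=0.5)
acc.update(['gt_a', 'gt_b'], ['hyp_1'], dist)

# Summarize MOTA, MOTP, and identity switches over the whole sequence.
mh = mm.metrics.create()
print(mh.compute(acc, metrics=['mota', 'motp', 'num_switches'], name='seq'))
```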

qiaoliang6 commented 3 years ago

> @qiaoliang6 That is not what I meant: the score in your ACM MM paper is higher than the one on the RRC website, so I don't understand whether you used a different evaluation script. The MOTA and MOTP scores are 43.16% and 76.78% on the RRC website, whereas in your paper (ACM MM version) the MOTA and MOTP scores are 68 and 76 for the IC15 detection phase.

We have just checked the issue, and it is indeed as @weijiawu mentioned: the reason lies in the inconsistency of the evaluation method. As can be seen on the website, they updated the evaluation method last year (switching to a new evaluation framework), which caused the performance of all methods in the ranking table to drop significantly. You can compare the results in the original competition report with the current results on the website, including for our method. And no, we do not consider unclear texts.

We will release the evaluation script we used when the model's code is published.

weijiawu commented 3 years ago

@qiaoliang6 Hello, thanks again for the valuable open-source work. Although it was stated that the work and metrics are being migrated and are expected to be released in mid-August, there are still a lot of uncertainties. Could the contributors first provide some related papers or code concerning the MOTA and MOTP metrics for reference? I have struggled with this issue for a long time. Looking forward to your reply.

WeihongM commented 3 years ago

@qiaoliang6 Yes, I think so too. It is unreasonable for a field to have no public evaluation script. In my opinion, sharing the evaluation script will not affect the trained models.

> @qiaoliang6 Hello, thanks again for the valuable open-source work. Although it was stated that the work and metrics are being migrated and are expected to be released in mid-August, there are still a lot of uncertainties. Could the contributors first provide some related papers or code concerning the MOTA and MOTP metrics for reference? I have struggled with this issue for a long time. Looking forward to your reply.

qiaoliang6 commented 3 years ago

> @qiaoliang6 Hello, thanks again for the valuable open-source work. Although it was stated that the work and metrics are being migrated and are expected to be released in mid-August, there are still a lot of uncertainties. Could the contributors first provide some related papers or code concerning the MOTA and MOTP metrics for reference? I have struggled with this issue for a long time. Looking forward to your reply.

You can refer to: D. Karatzas et al., "ICDAR 2013 Robust Reading Competition," in Proc. 12th Int. Conf. Document Analysis and Recognition (ICDAR), Aug. 2013, pp. 1484–1493.
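
For quick reference, the standard CLEAR MOT definitions used in that line of work (following Bernardin & Stiefelhagen, 2008) are, in LaTeX notation:

```latex
% CLEAR MOT metrics (Bernardin & Stiefelhagen, 2008)
\mathrm{MOTA} = 1 - \frac{\sum_t \left( \mathit{fn}_t + \mathit{fp}_t + \mathit{idsw}_t \right)}{\sum_t g_t}
\qquad
\mathrm{MOTP} = \frac{\sum_{t,i} d_t^i}{\sum_t c_t}
```

where, for each frame t, fn_t, fp_t, and idsw_t are the numbers of misses, false positives, and identity switches, g_t is the number of ground-truth objects, c_t is the number of matched pairs, and d_t^i is the distance (or overlap) of matched pair i. Differences in how matches and d_t^i are defined across evaluation frameworks are precisely what can make reported MOTA/MOTP values shift.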