GXYM / TextBPN-Plus-Plus

Arbitrary Shape Text Detection via Boundary Transformer. Paper: https://arxiv.org/abs/2205.05320, accepted by IEEE Transactions on Multimedia (T-MM 2023).

The results of running on the TD500 dataset are quite different from those given in the paper #8

Closed. HHeracles closed this issue 1 year ago.

HHeracles commented 2 years ago

Thanks for your work, but when I tried to reproduce your algorithm on the TD500 dataset I ran into a problem. I downloaded the dataset and the model you provided and ran the test command "CUDA_LAUNCH_BLOCKING=1 python3 eval_textBPN.py --net resnet50 --scale 1 --exp_name TD500 --checkepoch 107000 --test_size 640 960 --dis_threshold 0.35 --cls_threshold 0.9 --gpu 0", but the resulting metrics were {"precision": 0.6811594202898551, "recall": 0.6460481099656358, "hmean": 0.6631393298059965, "AP": 0}, which are quite different from the numbers reported in the paper. How should I configure the evaluation to reproduce the results you report?
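In case it helps with debugging, this is roughly the environment check I would run before eval_textBPN.py (a minimal sketch; it only assumes torch and opencv-python are installed):

```python
# Print the version-sensitive parts of the environment used for evaluation.
import sys

import cv2
import torch

print("python :", sys.version.split()[0])
print("torch  :", torch.__version__, "| CUDA:", torch.version.cuda)
print("cv2    :", cv2.__version__)
```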

GXYM commented 2 years ago

> Thanks for your work, but when I replicated your algorithm on the TD500 dataset, I found a problem. We downloaded the dataset and the model you provided and ran the test instruction "CUDA_LAUNCH_BLOCKING=1 python3 eval_textBPN.py --net resnet50 --scale 1 --exp_name TD500 --checkepoch 107000 --test_size 640 960 --dis_threshold 0.35 --cls_threshold 0.9 --gpu 0", and the performance data obtained was {"precision": 0.6811594202898551, "recall": 0.6460481099656358, "hmean": 0.6631393298059965, "AP": 0}, which was quite different from the data provided in the paper. How can I set it to reproduce the performance data you gave?

This is likely caused by a too-new OpenCV version (opencv-python < 4.5.0 is expected). In our experiments we found that an overly new OpenCV version leads to abnormal evaluation results; we use opencv-python == 4.5.0.
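A quick way to confirm which OpenCV the evaluation script is actually picking up is something like the snippet below (just a sketch; the 4.5 threshold only mirrors the advice above and is not part of the released code):

```python
# Sanity-check the OpenCV version seen by the evaluation environment.
# The 4.5 threshold simply reflects the advice in this thread.
import cv2

print("opencv-python version:", cv2.__version__)

major, minor = (int(x) for x in cv2.__version__.split(".")[:2])
if (major, minor) > (4, 5):
    print("OpenCV is newer than 4.5.x; evaluation metrics may be abnormal.")
    print('Consider: pip install "opencv-python==4.5.0.*"')
```

If the reported version is too new, downgrading opencv-python in the same environment and re-running eval_textBPN.py should avoid the abnormal precision/recall/hmean values described above.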