ymy-k / DPText-DETR

[AAAI'23 Oral] DPText-DETR: Towards Better Scene Text Detection with Dynamic Points in Transformer

evaluation error: Exception: The sample 0003115 not present in GT #39

Open a-vegatable-bird opened 1 month ago

a-vegatable-bird commented 1 month ago

Thank you very much for open-sourcing this work. I ran into this problem while trying to train on my custom data. Does it mean something is missing from my data annotations? I'd really appreciate your advice, thank you.

... (model training output omitted)
[05/24 12:35:40 d2.utils.events]: eta: 0:59:39 iter: 959 total_loss: 5.663 loss_ce: 0.07613 loss_ctrl_points: 0.7339 loss_ce_0: 0.2235 loss_ctrl_points_0: 0.9569 loss_ce_1: 0.1119 loss_ctrl_points_1: 0.7779 loss_ce_2: 0.08364 loss_ctrl_points_2: 0.7342 loss_ce_3: 0.07504 loss_ctrl_points_3: 0.7327 loss_ce_4: 0.0761 loss_ctrl_points_4: 0.7341 loss_ce_enc: 0.0678 loss_bbox_enc: 0.09829 loss_giou_enc: 0.3514 time: 0.3070 data_time: 0.0014 lr: 2e-05 max_mem: 5302M
[05/24 12:35:47 d2.utils.events]: eta: 0:59:38 iter: 979 total_loss: 6.035 loss_ce: 0.05531 loss_ctrl_points: 0.7331 loss_ce_0: 0.2261 loss_ctrl_points_0: 0.9199 loss_ce_1: 0.1128 loss_ctrl_points_1: 0.7784 loss_ce_2: 0.0791 loss_ctrl_points_2: 0.73 loss_ce_3: 0.06397 loss_ctrl_points_3: 0.7315 loss_ce_4: 0.05708 loss_ctrl_points_4: 0.732 loss_ce_enc: 0.04719 loss_bbox_enc: 0.1012 loss_giou_enc: 0.3392 time: 0.3073 data_time: 0.0015 lr: 2e-05 max_mem: 5302M
[05/24 12:35:53 adet.data.datasets.text]: Loaded 435 images in COCO format from datasets/ctw1500/test_poly.json
[05/24 12:35:53 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(1000, 1000), max_size=1280, sample_style='choice')]
[05/24 12:35:53 d2.data.common]: Serializing 435 elements to byte tensors and concatenating them all ...
[05/24 12:35:53 d2.data.common]: Serialized dataset takes 5.59 MiB
[05/24 12:35:53 d2.evaluation.evaluator]: Start inference on 435 batches
[05/24 12:35:55 d2.evaluation.evaluator]: Inference done 11/435. Dataloading: 0.0004 s/iter. Inference: 0.1324 s/iter. Eval: 0.0002 s/iter. Total: 0.1330 s/iter. ETA=0:00:56
[05/24 12:36:00 d2.evaluation.evaluator]: Inference done 46/435. Dataloading: 0.0008 s/iter. Inference: 0.1438 s/iter. Eval: 0.0002 s/iter. Total: 0.1448 s/iter. ETA=0:00:56
[05/24 12:36:05 d2.evaluation.evaluator]: Inference done 85/435. Dataloading: 0.0008 s/iter. Inference: 0.1369 s/iter. Eval: 0.0002 s/iter. Total: 0.1379 s/iter. ETA=0:00:48
[05/24 12:36:10 d2.evaluation.evaluator]: Inference done 124/435. Dataloading: 0.0008 s/iter. Inference: 0.1342 s/iter. Eval: 0.0008 s/iter. Total: 0.1358 s/iter. ETA=0:00:42
[05/24 12:36:15 d2.evaluation.evaluator]: Inference done 163/435. Dataloading: 0.0008 s/iter. Inference: 0.1325 s/iter. Eval: 0.0007 s/iter. Total: 0.1340 s/iter. ETA=0:00:36
[05/24 12:36:20 d2.evaluation.evaluator]: Inference done 203/435. Dataloading: 0.0008 s/iter. Inference: 0.1314 s/iter. Eval: 0.0006 s/iter. Total: 0.1328 s/iter. ETA=0:00:30
[05/24 12:36:25 d2.evaluation.evaluator]: Inference done 241/435. Dataloading: 0.0008 s/iter. Inference: 0.1316 s/iter. Eval: 0.0005 s/iter. Total: 0.1329 s/iter. ETA=0:00:25
[05/24 12:36:30 d2.evaluation.evaluator]: Inference done 280/435. Dataloading: 0.0008 s/iter. Inference: 0.1310 s/iter. Eval: 0.0007 s/iter. Total: 0.1325 s/iter. ETA=0:00:20
[05/24 12:36:35 d2.evaluation.evaluator]: Inference done 321/435. Dataloading: 0.0008 s/iter. Inference: 0.1298 s/iter. Eval: 0.0007 s/iter. Total: 0.1313 s/iter. ETA=0:00:14
[05/24 12:36:41 d2.evaluation.evaluator]: Inference done 360/435. Dataloading: 0.0008 s/iter. Inference: 0.1299 s/iter. Eval: 0.0006 s/iter. Total: 0.1313 s/iter. ETA=0:00:09
[05/24 12:36:46 d2.evaluation.evaluator]: Inference done 400/435. Dataloading: 0.0008 s/iter. Inference: 0.1295 s/iter. Eval: 0.0006 s/iter. Total: 0.1309 s/iter. ETA=0:00:04
[05/24 12:36:50 d2.evaluation.evaluator]: Total inference time: 0:00:56.166945 (0.130621 s / iter per device, on 1 devices)
[05/24 12:36:50 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:55 (0.129224 s / iter per device, on 1 devices)
[05/24 12:36:50 adet.evaluation.text_evaluation_det]: Saving results to output/r_50_poly/ctw1500/finetune/inference/text_results.json
An invalid detection in temp_det_results/0003001.txt line 3 is removed ...
An invalid detection in temp_det_results/0003001.txt line 20 is removed ...
An invalid detection in temp_det_results/0003355.txt line 14 is removed ...
An invalid detection in temp_det_results/0003355.txt line 16 is removed ...
An invalid detection in temp_det_results/0003355.txt line 21 is removed ...
An invalid detection in temp_det_results/0003355.txt line 38 is removed ...
An invalid detection in temp_det_results/0003356.txt line 11 is removed ...
An invalid detection in temp_det_results/0003356.txt line 19 is removed ...
... (more of these messages omitted)
Traceback (most recent call last):
  File "/home/lixinru/devdata/code_env/DPText-DETR/tools/train_net.py", line 291, in <module>
    launch(
  File "/home/lixinru/anaconda3/envs/dpdetr/lib/python3.9/site-packages/detectron2/engine/launch.py", line 82, in launch
    main_func(*args)
  File "/home/lixinru/devdata/code_env/DPText-DETR/tools/train_net.py", line 285, in main
    return trainer.train()
  File "/home/lixinru/devdata/code_env/DPText-DETR/tools/train_net.py", line 103, in train
    self.train_loop(self.start_iter, self.max_iter)
  File "/home/lixinru/devdata/code_env/DPText-DETR/tools/train_net.py", line 93, in train_loop
    self.after_step()
  File "/home/lixinru/anaconda3/envs/dpdetr/lib/python3.9/site-packages/detectron2/engine/train_loop.py", line 180, in after_step
    h.after_step()
  File "/home/lixinru/anaconda3/envs/dpdetr/lib/python3.9/site-packages/detectron2/engine/hooks.py", line 552, in after_step
    self._do_eval()
  File "/home/lixinru/anaconda3/envs/dpdetr/lib/python3.9/site-packages/detectron2/engine/hooks.py", line 525, in _do_eval
    results = self._func()
  File "/home/lixinru/anaconda3/envs/dpdetr/lib/python3.9/site-packages/detectron2/engine/defaults.py", line 453, in test_and_save_results
    self._last_eval_results = self.test(self.cfg, self.model)
  File "/home/lixinru/anaconda3/envs/dpdetr/lib/python3.9/site-packages/detectron2/engine/defaults.py", line 608, in test
    results_i = inference_on_dataset(model, data_loader, evaluator)
  File "/home/lixinru/anaconda3/envs/dpdetr/lib/python3.9/site-packages/detectron2/evaluation/evaluator.py", line 204, in inference_on_dataset
    results = evaluator.evaluate()
  File "/home/lixinru/devdata/code_env/DPText-DETR/adet/evaluation/text_evaluation_det.py", line 219, in evaluate
    text_result = self.evaluate_with_official_code(result_path, self._text_eval_gt_path)
  File "/home/lixinru/devdata/code_env/DPText-DETR/adet/evaluation/text_evaluation_det.py", line 178, in evaluate_with_official_code
    return text_eval_script_det.text_eval_main_det(det_file=result_path, gt_file=gt_path)
  File "/home/lixinru/devdata/code_env/DPText-DETR/adet/evaluation/text_eval_script_det.py", line 318, in text_eval_main_det
    return rrc_evaluation_funcs_det.main_evaluation(None, det_file, gt_file, default_evaluation_params, validate_data,
  File "/home/lixinru/devdata/code_env/DPText-DETR/adet/evaluation/rrc_evaluation_funcs_det.py", line 397, in main_evaluation
    validate_data_fn(p['g'], p['s'], evalParams)
  File "/home/lixinru/devdata/code_env/DPText-DETR/adet/evaluation/text_eval_script_det.py", line 54, in validate_data
    raise Exception("The sample %s not present in GT" %k)
Exception: The sample 0003115 not present in GT

a-vegatable-bird commented 1 month ago

I went through my whole dataset and all the labels are there. Why does this still happen?

a-vegatable-bird commented 1 month ago

It turned out to be a path mismatch when evaluation compares my gt_ctw1500.zip against the det.zip produced by the code. When I created gt_ctw1500.zip I zipped the folder itself; I should have selected all the txt files inside it and zipped those directly. Zipping the folder makes the archive entries look like ctw1500/000115.txt, which does not match the expected digits-only ('0-9') file names. It would be nice if the author could add a check or a note about this. I'm still a beginner: I first assumed my data was at fault and spent a week checking it before looking at the evaluation code, but the problem is finally solved.
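For anyone hitting the same thing, this is roughly how I rebuild the archive now (a minimal sketch only; gt_dir is a placeholder for wherever your per-image GT txt files live, and it assumes the evaluator only accepts digits-only names such as 0003115.txt at the root of the zip):

import glob
import os
import zipfile

# Zip the per-image GT txt files so they sit at the ROOT of the archive.
# Zipping the parent folder instead produces entries like
# "ctw1500/0003115.txt", which the evaluator's filename pattern rejects.
gt_dir = "datasets/ctw1500/gt"  # placeholder: folder holding 0003001.txt, 0003115.txt, ...
with zipfile.ZipFile("gt_ctw1500.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for txt_path in sorted(glob.glob(os.path.join(gt_dir, "*.txt"))):
        # arcname=basename strips the directory prefix from the stored name
        zf.write(txt_path, arcname=os.path.basename(txt_path))

With arcname set to the bare file name, the archive contains the txt files directly instead of a ctw1500/ subfolder, which is what the sample-key matching expects.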

fuzheng1209 commented 1 month ago

Hi, I have a question about training and evaluation with this code. Why does evaluation need a zipped GT archive? Isn't the json file already the corresponding label file?

a-vegatable-bird commented 1 month ago

The zip is only used for validation during evaluation; it is not needed for training. The detection results are compared against the files in the GT archive to compute precision, recall and hmean, which tell you how accurate your data and model are.
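In case it helps, those three numbers are the usual detection metrics; here is a rough sketch of how they relate (not the official matching code, which first pairs detections with GT polygons using an IoU threshold before counting):

def detection_metrics(num_matched: int, num_det: int, num_gt: int):
    """Precision / recall / hmean from counts of matched detections.

    num_matched: detections paired with a GT polygon (e.g. IoU >= 0.5)
    num_det: total detections, num_gt: total GT polygons
    """
    precision = num_matched / num_det if num_det else 0.0
    recall = num_matched / num_gt if num_gt else 0.0
    hmean = (2 * precision * recall / (precision + recall)
             if precision + recall else 0.0)
    return precision, recall, hmean

# e.g. 900 matched pairs out of 1000 detections and 1100 GT polygons
print(detection_metrics(900, 1000, 1100))  # ~= (0.90, 0.818, 0.857)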


fuzheng1209 commented 1 month ago

Thanks for your reply. I was curious why there are two label files, a json file and a GT archive. My current understanding is that the GT files use a fairly common annotation format, so they can also be used to evaluate the model on other datasets, while the json file is the format this model needs for training. If I want to evaluate performance on my own dataset, do I also need to convert the test set into a similar GT archive format?

a-vegatable-bird commented 1 month ago

Yes, follow the format on the official site. Just note that you should select all the label txt files and zip them directly rather than zipping their folder, otherwise the paths inside the archive will be wrong.
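If it helps, this is roughly how the per-image GT txt files could be generated from a COCO-style test json before zipping (a sketch under my own assumptions: "images", "annotations", "image_id" and "segmentation" are standard COCO fields, but the exact per-line format of the GT txt should be copied from the official gt_ctw1500.zip rather than from this snippet):

import json
import os
from collections import defaultdict

# Hypothetical paths; adjust to your own dataset layout.
test_json = "datasets/ctw1500/test_poly.json"
out_dir = "gt_txts"
os.makedirs(out_dir, exist_ok=True)

with open(test_json) as f:
    coco = json.load(f)

anns_per_image = defaultdict(list)
for ann in coco["annotations"]:
    # Assuming each annotation stores its polygon as a flat coordinate list
    # [x1, y1, x2, y2, ...], possibly nested one level as in COCO "segmentation".
    seg = ann["segmentation"]
    poly = seg[0] if isinstance(seg[0], list) else seg
    anns_per_image[ann["image_id"]].append(poly)

for img in coco["images"]:
    # One txt per image, named with digits only (e.g. 0003115.txt),
    # one polygon per line as comma-separated integer coordinates.
    name = f"{img['id']:07d}.txt"
    with open(os.path.join(out_dir, name), "w") as f:
        for poly in anns_per_image.get(img["id"], []):
            f.write(",".join(str(int(round(v))) for v in poly) + "\n")

After that, zip the contents of out_dir directly (all txt files selected, no containing folder), as described above.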


fuzheng1209 commented 1 month ago

OK, thanks.