PaddlePaddle / PaddleOCR

Awesome multilingual OCR toolkits based on PaddlePaddle (practical ultra-lightweight OCR system, supporting recognition of 80+ languages, providing data annotation and synthesis tools, and supporting training and deployment on server, mobile, embedded and IoT devices)
https://paddlepaddle.github.io/PaddleOCR/
Apache License 2.0

Error while resuming training from pretrained detection model #2193

Closed: Ankan1998 closed this issue 2 years ago

Ankan1998 commented 3 years ago

Traceback (most recent call last):
  File "tools/train.py", line 121, in <module>
    main(config, device, logger, vdl_writer)
  File "tools/train.py", line 98, in main
    eval_class, pre_best_model_dict, logger, vdl_writer)
  File "C:\Users\Ankan\Desktop\PaddleOCR-release-2.0\tools\program.py", line 236, in train
    post_result = post_process_class(preds, batch[1])
  File "C:\Users\Ankan\Desktop\PaddleOCR-release-2.0\ppocr\postprocess\east_postprocess.py", line 132, in __call__
    src_h, src_w, ratio_h, ratio_w = shape_list[ino]
ValueError: not enough values to unpack (expected 4, got 1)
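For context, the failing line expects the post-process to receive, for every image in the batch, a 4-tuple of resize metadata (src_h, src_w, ratio_h, ratio_w). The ValueError says each entry only carries one value, which suggests the training batch does not contain that metadata. A minimal standalone sketch (not PaddleOCR source) of how the unpack behaves:

```python
import numpy as np

# Minimal illustration: the EAST post-process unpacks four values of resize
# metadata per image from shape_list.
shape_list = np.array([[720.0, 1280.0, 0.5, 0.5]])  # shape (batch, 4): unpacks fine
src_h, src_w, ratio_h, ratio_w = shape_list[0]

# If each row carries only a single value (as in a training batch that holds
# label tensors instead of this metadata), the same unpack raises the error
# shown in the traceback above.
training_row = np.array([[1.0]])
try:
    src_h, src_w, ratio_h, ratio_w = training_row[0]
except ValueError as err:
    print(err)  # not enough values to unpack (expected 4, got 1)
```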

Config file:

Global:
  use_gpu: true
  epoch_num: 8890
  log_smooth_window: 20
  print_batch_step: 2
  save_model_dir: ./output/east_r50_vd/
  save_epoch_step: 2
  # evaluation is run every 5000 iterations after the 4000th iteration
  eval_batch_step: [4, 1]
  # if pretrained_model is saved in static mode, load_static_weights must set to True
  load_static_weights: True
  cal_metric_during_train: True
  pretrained_model: ./pretrain_models/ResNet50_vd_pretrained/
  checkpoints:
  save_inference_dir:
  use_visualdl: True
  infer_img:
  save_res_path: ./output/det_east/predicts_east.txt

Architecture:
  model_type: det
  algorithm: EAST
  Transform:
  Backbone:
    name: ResNet
    layers: 50
  Neck:
    name: EASTFPN
    model_name: large
  Head:
    name: EASTHead
    model_name: large

Loss:
  name: EASTLoss

Optimizer:
  name: Adam
  beta1: 0.9
  beta2: 0.999
  lr:
    name: Cosine
    learning_rate: 0.001
    warmup_epoch: 0
  regularizer:
    name: 'L2'
    factor: 0

PostProcess:
  name: EASTPostProcess
  score_thresh: 0.8
  cover_thresh: 0.1
  nms_thresh: 0.2

Metric:
  name: DetMetric
  main_indicator: hmean

Train:
  dataset:
    name: SimpleDataSet
    data_dir: ./train_data/text_localization
    label_file_list:

Eval:
  dataset:
    name: SimpleDataSet
    data_dir: ./train_data/text_localization
    label_file_list:

Image img_603

Label t_img/img_603.jpg;[{"transcription": "###", "points": [[552, 340], [564, 340], [563, 358], [551, 358]]}, {"transcription": "###", "points": [[564, 339], [577, 339], [576, 349], [564, 348]]}, {"transcription": "###", "points": [[122, 264], [175, 266], [168, 302], [116, 300]]}, {"transcription": "###", "points": [[230, 286], [244, 287], [240, 307], [226, 306]]}, {"transcription": "###", "points": [[245, 285], [283, 288], [271, 346], [232, 343]]}, {"transcription": "###", "points": [[278, 288], [318, 291], [315, 318], [275, 316]]}, {"transcription": "###", "points": [[213, 226], [233, 228], [227, 248], [207, 246]]}, {"transcription": "###", "points": [[13, 264], [43, 267], [42, 307], [12, 304]]}, {"transcription": "###", "points": [[53, 250], [112, 254], [100, 336], [41, 332]]}, {"transcription": "###", "points": [[466, 323], [487, 324], [485, 338], [464, 337]]}, {"transcription": "Compare!", "points": [[231, 225], [343, 233], [342, 253], [230, 245]]}, {"transcription": "###", "points": [[437, 325], [463, 326], [459, 354], [432, 353]]}, {"transcription": "###", "points": [[685, 272], [717, 271], [718, 285], [686, 286]]}, {"transcription": "###", "points": [[687, 288], [707, 286], [707, 302], [687, 304]]}, {"transcription": "###", "points": [[690, 306], [713, 306], [711, 318], [688, 319]]}, {"transcription": "###", "points": [[184, 542], [223, 546], [221, 562], [182, 558]]}, {"transcription": "###", "points": [[68, 534], [106, 536], [104, 552], [67, 551]]}, {"transcription": "###", "points": [[56, 632], [93, 634], [91, 650], [54, 649]]}, {"transcription": "###", "points": [[179, 635], [217, 638], [215, 654], [177, 651]]}, {"transcription": "###", "points": [[415, 288], [458, 289], [457, 306], [414, 305]]}, {"transcription": "###", "points": [[420, 328], [429, 328], [429, 340], [420, 340]]}, {"transcription": "###", "points": [[4, 158], [120, 165], [116, 197], [0, 190]]}, {"transcription": "###", "points": [[4, 189], [114, 195], [114, 207], [3, 201]]}, {"transcription": "###", "points": [[1114, 263], [1146, 260], [1146, 301], [1114, 304]]}, {"transcription": "###", "points": [[1097, 260], [1114, 260], [1112, 283], [1095, 283]]}, {"transcription": "###", "points": [[1056, 256], [1073, 252], [1072, 261], [1055, 265]]}, {"transcription": "###", "points": [[1059, 263], [1090, 259], [1088, 268], [1057, 272]]}, {"transcription": "###", "points": [[1059, 276], [1076, 273], [1073, 280], [1057, 283]]}]

N.B. I changed the delimiter from '\t' to ';'.
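Since the delimiter was changed from the default, one quick way to confirm the label file still parses into an image path plus a JSON list of boxes is a small standalone check like the one below. The file path and helper name are placeholders, not PaddleOCR code; SimpleDataSet splits label lines on '\t' by default, so the dataset loader has to agree with whatever delimiter the file actually uses.

```python
import json

# Standalone sanity check: every label line should split into an image path
# and valid JSON when cut at the custom ';' delimiter.
def check_label_line(line, delimiter=";"):
    img_path, annotation = line.rstrip("\n").split(delimiter, 1)  # split once only
    boxes = json.loads(annotation)
    for box in boxes:
        assert "transcription" in box and "points" in box
    return img_path, boxes

# Hypothetical label file path, for illustration only.
with open("./train_data/text_localization/train_label.txt", encoding="utf-8") as f:
    for line_no, line in enumerate(f, 1):
        try:
            check_label_line(line)
        except (ValueError, AssertionError) as err:
            print(f"line {line_no}: {err}")
```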

Any help is much appreciated.

thongvhoang commented 2 years ago

Replace cal_metric_during_train: True with cal_metric_during_train: False.
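For reference, that key sits under the Global section of the detection config. A throwaway sketch of flipping it with PyYAML follows; the config path is an assumption, so point it at whichever file is actually passed to tools/train.py.

```python
import yaml  # pip install pyyaml

# Hedged sketch: turn off metric computation during training by editing the
# detection config. The path below is an assumption.
cfg_path = "configs/det/det_r50_vd_east.yml"

with open(cfg_path, encoding="utf-8") as f:
    cfg = yaml.safe_load(f)

cfg["Global"]["cal_metric_during_train"] = False

with open(cfg_path, "w", encoding="utf-8") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)
```

Editing the YAML by hand works just as well; the snippet only shows where the key lives (note that safe_dump drops comments).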

paddle-bot-old[bot] commented 2 years ago

Since you haven't replied for more than 3 months, we have closed this issue/PR. If the problem is not solved or there is a follow-up question, please reopen it at any time and we will continue to follow up. It is recommended to pull and try the latest code first.

duong0411 commented 1 year ago

> Replace cal_metric_during_train: True with cal_metric_during_train: False.

I want to show accuracy during detection training, so I set cal_metric_during_train: True again, and I get an error like this.