Closed: nissansz closed this issue 6 months ago.
[2023/06/05 15:09:52] ppocr INFO: epoch: [1/5], global_step: 17100, lr: 0.000500, acc: 0.874999, norm_edit_dis: 0.993238, loss: 0.266314, avg_reader_cost: 0.15862 s, avg_batch_cost: 0.57674 s, avg_samples: 8.0, ips: 13.87104 samples/s, eta: 104 days, 12:43:17
[2023/06/05 15:10:52] ppocr INFO: epoch: [1/5], global_step: 17200, lr: 0.000500, acc: 0.874999, norm_edit_dis: 0.993034, loss: 0.292434, avg_reader_cost: 0.17781 s, avg_batch_cost: 0.59422 s, avg_samples: 8.0, ips: 13.46293 samples/s, eta: 104 days, 13:09:31
[2023/06/05 15:11:49] ppocr INFO: epoch: [1/5], global_step: 17300, lr: 0.000500, acc: 0.999999, norm_edit_dis: 1.000000, loss: 0.142027, avg_reader_cost: 0.15904 s, avg_batch_cost: 0.57395 s, avg_samples: 8.0, ips: 13.93850 samples/s, eta: 104 days, 13:04:50
[2023/06/05 15:12:46] ppocr INFO: epoch: [1/5], global_step: 17400, lr: 0.000500, acc: 0.874999, norm_edit_dis: 0.991629, loss: 0.389907, avg_reader_cost: 0.15339 s, avg_batch_cost: 0.56702 s, avg_samples: 8.0, ips: 14.10895 samples/s, eta: 104 days, 12:49:46
[2023/06/05 15:13:45] ppocr INFO: epoch: [1/5], global_step: 17500, lr: 0.000500, acc: 0.999999, norm_edit_dis: 1.000000, loss: 0.436357, avg_reader_cost: 0.17739 s, avg_batch_cost: 0.59199 s, avg_samples: 8.0, ips: 13.51364 samples/s, eta: 104 days, 13:12:09
[2023/06/05 15:14:44] ppocr INFO: epoch: [1/5], global_step: 17600, lr: 0.000500, acc: 0.874999, norm_edit_dis: 0.992544, loss: 0.364459, avg_reader_cost: 0.16807 s, avg_batch_cost: 0.58562 s, avg_samples: 8.0, ips: 13.66062 samples/s, eta: 104 days, 13:24:49
[2023/06/05 15:15:43] ppocr INFO: epoch: [1/5], global_step: 17700, lr: 0.000500, acc: 0.874999, norm_edit_dis: 0.990152, loss: 0.652746, avg_reader_cost: 0.17348 s, avg_batch_cost: 0.59129 s, avg_samples: 8.0, ips: 13.52984 samples/s, eta: 104 days, 13:45:41
1. First confirm whether this is overfitting; if it is, you need to improve things on the sample side (more or better training data).
For single-line recognition I convert the model to ONNX and then use it; there the problem is like the following, but fairly mild.
For the table region I use the table model. ONNX cannot be used for recognition there, so I use the infer model, and the same problem as above is quite noticeable. The det model is the one bundled with Paddle. The full parameter log is as follows:
[2023/06/05 11:07:10] ppocr DEBUG: Namespace(alpha=1.0, benchmark=False, beta=1.0, cls_batch_num=6, cls_image_shape='3, 48, 192', cls_model_dir=None, cls_thresh=0.9, cpu_threads=10, crop_res_save_dir='./output', det=True, det_algorithm='DB', det_box_type='quad', det_db_box_thresh=0.6, det_db_score_mode='fast', det_db_thresh=0.3, det_db_unclip_ratio=1.5, det_east_cover_thresh=0.1, det_east_nms_thresh=0.2, det_east_score_thresh=0.8, det_limit_side_len=960, det_limit_type='max', det_model_dir='C:\Users\Ni/.paddleocr/whl\det\ch\ch_PP-OCRv3_det_infer', det_pse_box_thresh=0.85, det_pse_min_area=16, det_pse_scale=1, det_pse_thresh=0, det_sast_nms_thresh=0.2, det_sast_score_thresh=0.5, draw_img_save_dir='./inference_results', drop_score=0.5, e2e_algorithm='PGNet', e2e_char_dict_path='./ppocr/utils/ic15_dict.txt', e2e_limit_side_len=768, e2e_limit_type='max', e2e_model_dir=None, e2e_pgnet_mode='fast', e2e_pgnet_score_thresh=0.5, e2e_pgnet_valid_set='totaltext', enable_mkldnn=False, fourier_degree=5, gpu_mem=500, help='==SUPPRESS==', image_dir=None, image_orientation=False, ir_optim=True, kie_algorithm='LayoutXLM', label_list=['0', '180'], lang='ch', layout=False, layout_dict_path='F:\pycharm2020.2\PaddleOCR-release-2.6\ppocr\utils\dict\layout_dict\layout_cdla_dict.txt', layout_model_dir='C:\Users\Ni/.paddleocr/whl\layout\picodet_lcnet_x1_0_fgd_layout_cdla_infer', layout_nms_threshold=0.5, layout_score_threshold=0.5, max_batch_size=10, max_text_length=25, merge_no_span_structure=True, min_subgraph_size=15, mode='structure', ocr=True, ocr_order_method=None, ocr_version='PP-OCRv3', output='./output', page_num=0, precision='fp32', process_id=0, re_model_dir=None, rec=True, rec_algorithm='SVTR_LCNet', rec_batch_num=6, rec_char_dict_path='L:\resnet34\4lan\jpdict\japan_dict.txt', rec_image_inverse=True, rec_image_shape='3, 48, 320', rec_model_dir='C:\Users\Ni\Desktop\jp5fontsres34\best_accuracy', recovery=False, return_ocr_result_in_table=True, save_crop_res=False, 
save_log_path='./log_output/', scales=[8, 16, 32], ser_dict_path='../train_data/XFUND/class_list_xfun.txt', ser_model_dir=None, show_log=True, sr_batch_num=1, sr_image_shape='3, 32, 128', sr_model_dir=None, structure_version='PP-StructureV2', table=True, table_algorithm='TableAttn', table_char_dict_path='F:\pycharm2020.2\PaddleOCR-release-2.6\ppocr\utils\dict\table_structure_dict_ch.txt', table_max_len=488, table_model_dir='L:/paddle/models/ch_ppstructure_mobile_v2.0_SLANet_infer', total_process_num=1, type='ocr', use_angle_cls=False, use_dilation=False, use_gpu=True, use_mp=False, use_npu=False, use_onnx=False, use_pdf2docx_api=False, use_pdserving=False, use_space_char=True, use_tensorrt=False, use_visual_backbone=True, use_xpu=False, vis_font_path='./doc/fonts/simfang.ttf', warmup=False) [2023/06/05 11:07:10] ppocr WARNING: When args.layout is false, args.ocr is automatically set to false
For recognition in the table region, is your rec network V3 or Res34? From the log it looks like V3 -- ocr_version='PP-OCRv3'. Did you train it yourself, or is it the Paddle pretrained model?
For recognition in the table region, the rec network is ResNet-34: rec_model_dir='C:\Users\Ni\Desktop\jp5fontsres34\best_accuracy'
But I don't know how to use ONNX for the table region.
Add DB text detection. For regular text, with enough data, ACC can usually reach a very high level. If there is a detection stage, check whether the problem lies in detection or in recognition, and whether max_text_length is large enough; if none of that helps, then go over the hyperparameters.
The Optimizer has a warmup_epoch parameter; during the warmup epochs it is normal for the learning rate, acc, and norm_edit_dis to fluctuate noticeably.
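For context, warmup_epoch sits under the learning-rate schedule in the Optimizer section of the training YAML. A typical fragment in the style of a PP-OCRv3 rec config (the values here are examples, not taken from your config):

```yaml
Optimizer:
  name: Adam
  beta1: 0.9
  beta2: 0.999
  lr:
    name: Cosine
    learning_rate: 0.0005
    warmup_epoch: 2    # lr ramps up over the first 2 epochs
  regularizer:
    name: L2
    factor: 3.0e-05
```

During those warmup epochs the metrics you pasted (acc, norm_edit_dis) are expected to swing before settling.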
Recognition on the table region alone works poorly, and I don't know how to improve it.
(same ppocr DEBUG Namespace log as pasted above)
@nissansz, for the text-recognition part on the table region, check whether the input size matches what you trained with: rec_image_shape='3, 48, 320'
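For reference, before inference the recognizer resizes each text crop to rec_image_shape, keeping the aspect ratio and right-padding with zeros. A simplified sketch of that logic (modeled on resize_norm_img in tools/infer/predict_rec.py; a pure-NumPy nearest-neighbor resize stands in for cv2.resize, and details may differ by PaddleOCR version):

```python
import math
import numpy as np

def _resize_nn(img, out_h, out_w):
    """Nearest-neighbor resize (stand-in for cv2.resize)."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def resize_norm_img(img, rec_image_shape=(3, 48, 320)):
    """Resize an HxWx3 text-line crop to the recognizer's input shape,
    preserving aspect ratio and right-padding with zeros."""
    imgC, imgH, imgW = rec_image_shape
    h, w = img.shape[:2]
    ratio = w / float(h)
    # width after scaling height to imgH, capped at imgW
    resized_w = min(imgW, int(math.ceil(imgH * ratio)))
    resized = _resize_nn(img, imgH, resized_w)
    # CHW layout, normalized to [-1, 1]
    norm = resized.astype("float32").transpose((2, 0, 1)) / 255.0
    norm = (norm - 0.5) / 0.5
    padded = np.zeros((imgC, imgH, imgW), dtype="float32")
    padded[:, :, :resized_w] = norm
    return padded
```

If your Res34 model was trained with a height of 32 while the table pipeline feeds it 48-high inputs (or vice versa), accuracy drops sharply, which is why the shape is worth checking first.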
@WYH150123 How do I check the table-region size, and how do I resize it? Isn't it resized automatically?
@nissansz, please post the config file used to train the Res34 model at rec_model_dir='C:\Users\Ni\Desktop\jp5fontsres34\best_accuracy'
@nissansz, the table-recognition code in PaddleOCR originally uses PP-OCRv3, whose input size is rec_image_shape='3, 48, 320', while Res34 configs usually use a height of 32. This may be the problem; please confirm.
@nissansz, a correction: I mean text recognition inside the table region. Did you replace the original V3 model in the code with the Res34 recognition model you trained yourself? The input sizes may not match.
Both are 48 in height.
It may still be that the table (structure) model is not accurate enough.
Roughly how long would training on tables produced by table generation take, and how high can the accuracy get?
Is there a way, during table recognition, to switch to the ONNX model for recognizing the single-line text in each cell?
If the table is treated as an ordinary region and recognized with CTPN + ONNX, the accuracy is fine. The problem now is that I don't know how to fill the CTPN + ONNX recognition results into the HTML cells produced by table recognition. Is there code for that fill-in step?
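The fill-in step amounts to matching each recognized text box to a predicted cell box and splicing the texts into the `<td></td>` tokens of the structure output. A minimal sketch of that idea (nearest-center matching only; PP-Structure's actual matcher in ppstructure/table also uses IoU and handles spanning cells, so treat this as illustrative):

```python
import numpy as np

def fill_html_cells(html_tokens, cell_boxes, ocr_boxes, ocr_texts):
    """Assign each OCR text line to the cell whose box center is
    nearest, then fill the texts into the <td></td> tokens in order.
    Boxes are [x1, y1, x2, y2]."""
    if not cell_boxes:
        return "".join(html_tokens)

    def center(b):
        return ((b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0)

    # collect texts per cell index
    cell_texts = [[] for _ in cell_boxes]
    for box, text in zip(ocr_boxes, ocr_texts):
        cx, cy = center(box)
        dists = [(center(cb)[0] - cx) ** 2 + (center(cb)[1] - cy) ** 2
                 for cb in cell_boxes]
        cell_texts[int(np.argmin(dists))].append(text)

    # walk the structure tokens and fill each empty-cell slot
    out, cell_idx = [], 0
    for tok in html_tokens:
        if tok == "<td></td>" and cell_idx < len(cell_texts):
            out.append("<td>" + " ".join(cell_texts[cell_idx]) + "</td>")
            cell_idx += 1
        else:
            out.append(tok)
    return "".join(out)
```

In your case the ocr_boxes/ocr_texts would come from the CTPN + ONNX pipeline and the html_tokens/cell_boxes from the SLANet table model; the matching itself is independent of which recognizer produced the texts.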
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
Please provide the following complete information so the problem can be located quickly:
How can text-recognition accuracy be substantially improved? When training with a ResNet backbone, besides adding hard cases, are there other ways to reduce missed and wrong characters? Paddle's MobileNet models have teacher/student config files for this, but for ResNet it apparently requires modifying the source code, and I don't know how to change the code.
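For background, the teacher/student configs boil down to adding a soft-label loss between the student's and a frozen teacher's output distributions on top of the usual CTC/CE loss. A generic NumPy sketch of that core term (illustrative only; this is not PaddleOCR's actual distillation code, which lives in its distillation configs and loss classes):

```python
import numpy as np

def softmax(logits, axis=-1):
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions --
    the extra loss term a teacher/student setup adds."""
    t = softmax(teacher_logits / temperature)
    s = softmax(student_logits / temperature)
    kl = (t * (np.log(t + 1e-12) - np.log(s + 1e-12))).sum(axis=-1)
    # scale by T^2 so gradient magnitudes stay comparable
    return float(kl.mean() * temperature ** 2)
```

Conceptually, applying this to a ResNet student means running a stronger teacher on the same batches and minimizing ctc_loss + alpha * distillation_loss, which is why it requires touching the training loop rather than just the config.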