PaddlePaddle / PaddleOCR

Awesome multilingual OCR toolkits based on PaddlePaddle (practical ultra-lightweight OCR system, supports recognition of 80+ languages, provides data annotation and synthesis tools, supports training and deployment on server, mobile, embedded and IoT devices)
https://paddlepaddle.github.io/PaddleOCR/
Apache License 2.0

ValueError: (InvalidArgument) The conv2d Op's Input Variable Input contains uninitialized Tensor. #8847

Closed TapendraBaduwal closed 1 year ago

TapendraBaduwal commented 1 year ago

I am facing this issue when I use the inference.pdiparams, inference.pdiparams.info, and inference.pdmodel files for layout analysis.

ValueError: (InvalidArgument) The conv2d Op's Input Variable Input contains uninitialized Tensor. [Hint: Expected t->IsInitialized() == true, but received t->IsInitialized():0 != true:1.] (at /paddle/paddle/fluid/framework/operator.cc:2094) [operator < conv2d > error]
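For reference, a minimal sketch of how a layout-only engine is typically constructed with PaddleOCR's PPStructure (the model directory and image path below are placeholders, not the exact script):

import cv2
from paddleocr import PPStructure

# Layout analysis only: table recognition and OCR are disabled
layout_engine = PPStructure(table=False, ocr=False, show_log=True,
                            layout_model_dir='./inference/layout_model')  # placeholder path to the exported model
img = cv2.imread('page.jpg')  # placeholder image
layout_result = layout_engine(img)
for region in layout_result:
    print(region['type'], region['bbox'])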

an1018 commented 1 year ago

What's your version of PaddlePaddle?

TapendraBaduwal commented 1 year ago

@an1018 PaddlePaddle 2.4.1, compiled with
with_avx: ON
with_gpu: OFF
with_mkl: ON
with_mkldnn: ON
with_python: ON
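As a quick cross-check, the installed build can also be inspected from Python:

import paddle
print(paddle.__version__)   # e.g. 2.4.1
paddle.utils.run_check()    # runs a tiny model to verify the install works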

an1018 commented 1 year ago

You can try installing PaddlePaddle 2.3.
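For example, the CPU wheel can be switched with pip (2.3.2 being a released 2.3.x version):

python3 -m pip uninstall -y paddlepaddle
python3 -m pip install paddlepaddle==2.3.2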

TapendraBaduwal commented 1 year ago

@an1018 Same error with Paddle 2.3:

--- fused 0 elementwise_mul with hardswish activation
--- fused 0 elementwise_mul with sqrt activation
--- fused 0 elementwise_mul with abs activation
--- fused 0 elementwise_mul with clip activation
--- fused 0 elementwise_mul with gelu activation
--- fused 0 elementwise_mul with relu6 activation
--- fused 0 elementwise_mul with sigmoid activation
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/tapendra/Desktop/layout/layout_analysis.py", line 71, in <module>
    doc_layout_recognition_result = doc_layout_analysis.layout_recognition_bbox_sorting()
  File "/home/tapendra/Desktop/layout/layout_analysis.py", line 29, in layout_recognition_bbox_sorting
    layout_result = layout_engine(numpyimg)
  File "/home/tapendra/.local/lib/python3.8/site-packages/paddleocr/paddleocr.py", line 645, in __call__
    res, _ = super().__call__(
  File "/home/tapendra/.local/lib/python3.8/site-packages/paddleocr/ppstructure/predict_system.py", line 112, in __call__
    layout_res, elapse = self.layout_predictor(img)
  File "/home/tapendra/.local/lib/python3.8/site-packages/paddleocr/ppstructure/layout/predict_layout.py", line 86, in __call__
    self.predictor.run()
ValueError: (InvalidArgument) The conv2d Op's Input Variable Input contains uninitialized Tensor.
  [Hint: Expected t->IsInitialized() == true, but received t->IsInitialized():0 != true:1.] (at /paddle/paddle/fluid/framework/operator.cc:2094)
  [operator < conv2d > error]

tapendra@tapendra:~/Desktop/layout$ paddle version
/home/tapendra/.local/bin/paddle: line 150: python: command not found
PaddlePaddle 2.3.2, compiled with
with_avx: ON
with_gpu: OFF
with_mkl: ON
with_mkldnn: ON
with_python: ON

TapendraBaduwal commented 1 year ago

@an1018 my train.json file data for model training {"images":[{"height":1650,"width":1275,"id":1,"file_name":"Tu_page-0003.jpg"},{"height":1754,"width":1240,"id":2,"file_name":"Tu_page-004.jpg"},{"height":1754,"width":1240,"id":3,"file_name":"Tu_page-005.jpg"},{"height":1650,"width":1275,"id":4,"file_name":"Tu_page-0001.jpg"},{"height":1650,"width":1275,"id":5,"file_name":"Tu_page-0002.jpg"}],"annotations":[{"iscrowd":0,"image_id":1,"bbox":[48.205128205128176,128.15384615384616,992.3076923076924,310.2564102564103],"segmentation":[[48,135,55,438,1040,433,1025,128]],"category_id":0,"id":1,"area":298076},{"iscrowd":0,"image_id":1,"bbox":[189.23076923076917,402.5128205128205,641.025641025641,446.15384615384613],"segmentation":[[189,430,199,840,830,848,814,402]],"category_id":1,"id":2,"area":269132},{"iscrowd":0,"image_id":2,"bbox":[112.67567567567573,193.40540540540542,427.027027027027,178.3783783783784],"segmentation":[[112,193,120,369,539,371,531,193]],"category_id":0,"id":3,"area":74149},{"iscrowd":0,"image_id":2,"bbox":[545.1081081081081,185.2972972972973,540.5405405405406,194.59459459459458],"segmentation":[[545,185,553,379,1082,374,1085,188]],"category_id":2,"id":4,"area":101968},{"iscrowd":0,"image_id":2,"bbox":[120.78378378378375,377.1891891891892,1002.7027027027027,475.6756756756757],"segmentation":[[120,390,145,852,1123,850,1099,377]],"category_id":0,"id":5,"area":457655},{"iscrowd":0,"image_id":2,"bbox":[258.62162162162167,793.4054054054054,737.8378378378377,237.83783783783792],"segmentation":[[258,823,266,1031,996,1023,966,793]],"category_id":1,"id":6,"area":157742},{"iscrowd":0,"image_id":2,"bbox":[172.1351351351351,1017.7297297297298,864.8648648648649,83.78378378378375],"segmentation":[[177,1020,172,1101,1037,1093,1034,1017]],"category_id":0,"id":7,"area":67461},{"iscrowd":0,"image_id":2,"bbox":[293.7567567567568,1077.1891891891892,718.9189189189188,224.32432432432438],"segmentation":[[293,1079,299,1301,1012,1293,1004,1077]],"category_id":1,"id":8,"area":155942},{"iscrowd":0,"image_id":2,"bbox":[153.21621621621625,1290.7027027027027,864.8648648648648,67.5675675675675],"segmentation":[[153,1290,158,1358,1018,1358,1018,1306]],"category_id":0,"id":9,"area":51241},{"iscrowd":0,"image_id":2,"bbox":[264.0270270270271,1358.2702702702702,754.054054054054,267.5675675675677],"segmentation":[[264,1360,277,1625,1012,1617,1018,1358]],"category_id":1,"id":10,"area":195226},{"iscrowd":0,"image_id":3,"bbox":[145.10810810810813,163.67567567567568,940.5405405405406,91.89189189189187],"segmentation":[[145,177,147,255,1085,255,1082,163]],"category_id":0,"id":11,"area":79861},{"iscrowd":0,"image_id":3,"bbox":[247.81081081081084,247.45945945945948,686.4864864864865,200.0],"segmentation":[[247,255,261,444,934,447,928,247]],"category_id":1,"id":12,"area":131771},{"iscrowd":0,"image_id":3,"bbox":[104.56756756756761,442.05405405405406,1008.108108108108,337.8378378378378],"segmentation":[[104,442,118,774,1112,779,1096,447]],"category_id":0,"id":13,"area":330105},{"iscrowd":0,"image_id":3,"bbox":[242.40540540540542,744.7567567567568,732.4324324324325,218.91891891891896],"segmentation":[[242,758,253,963,974,955,953,744]],"category_id":1,"id":14,"area":149225},{"iscrowd":0,"image_id":3,"bbox":[104.56756756756761,939.3513513513514,1016.2162162162161,354.05405405405406],"segmentation":[[104,942,109,1285,1120,1293,1112,939]],"category_id":0,"id":15,"area":351928},{"iscrowd":0,"image_id":4,"bbox":[35.38461538461536,105.07692307692307,479.48717948717956,158.97435897435898],"segmentation":[[35,105,35,264,514,261,514
,105]],"category_id":0,"id":16,"area":75611},{"iscrowd":0,"image_id":4,"bbox":[514.8717948717949,105.07692307692307,615.3846153846152,179.48717948717947],"segmentation":[[514,105,1130,110,1122,284,520,264]],"category_id":2,"id":17,"area":101512},{"iscrowd":0,"image_id":4,"bbox":[30.256410256410277,294.8205128205128,1084.6153846153848,535.8974358974359],"segmentation":[[35,294,1114,307,1109,830,30,825]],"category_id":0,"id":18,"area":568852},{"iscrowd":0,"image_id":4,"bbox":[258.46153846153845,840.9743589743589,641.025641025641,358.9743589743591],"segmentation":[[258,840,268,1192,894,1199,899,843]],"category_id":1,"id":19,"area":224089},{"iscrowd":0,"image_id":4,"bbox":[86.66666666666663,1207.6410256410256,1002.5641025641027,97.43589743589746],"segmentation":[[86,1207,89,1302,1089,1305,1084,1223]],"category_id":0,"id":20,"area":88313},{"iscrowd":0,"image_id":4,"bbox":[158.46153846153845,1282.0,766.6666666666666,287.17948717948707],"segmentation":[[158,1282,171,1548,925,1569,922,1312]],"category_id":1,"id":21,"area":198303},{"iscrowd":0,"image_id":4,"bbox":[89.23076923076917,1548.6666666666665,1002.5641025641027,82.05128205128221],"segmentation":[[89,1548,89,1620,1091,1630,1089,1553]],"category_id":0,"id":22,"area":74444},{"iscrowd":0,"image_id":5,"bbox":[76.41025641025635,115.33333333333334,966.6666666666667,89.74358974358972],"segmentation":[[76,120,78,197,1043,205,1043,115]],"category_id":0,"id":23,"area":80447},{"iscrowd":0,"image_id":5,"bbox":[237.9487179487179,210.2051282051282,612.8205128205128,271.7948717948718],"segmentation":[[237,220,245,482,850,482,835,210]],"category_id":1,"id":24,"area":160401},{"iscrowd":0,"image_id":5,"bbox":[73.84615384615381,494.8205128205128,946.1538461538462,94.8717948717948],"segmentation":[[76,494,73,582,1020,589,1012,499]],"category_id":1,"id":25,"area":83228},{"iscrowd":0,"image_id":5,"bbox":[199.48717948717945,602.5128205128204,664.1025641025642,258.974358974359],"segmentation":[[199,605,204,851,863,861,861,602]],"category_id":1,"id":26,"area":166742},{"iscrowd":0,"image_id":5,"bbox":[58.46153846153845,879.4358974358975,969.2307692307692,123.07692307692298],"segmentation":[[61,879,58,1002,1025,1002,1027,882]],"category_id":0,"id":27,"area":117738},{"iscrowd":0,"image_id":5,"bbox":[263.58974358974353,1002.5128205128204,551.2820512820514,266.66666666666663],"segmentation":[[263,1005,271,1269,814,1264,814,1002]],"category_id":1,"id":28,"area":143892},{"iscrowd":0,"image_id":5,"bbox":[68.71794871794873,1256.3589743589744,964.102564102564,258.9743589743589],"segmentation":[[68,1276,71,1502,1032,1515,1030,1256]],"category_id":1,"id":29,"area":232998}],"categories":[{"id":0,"name":"text","supercategory":"text"},{"id":1,"name":"figure","supercategory":"figure"},{"id":2,"name":"table","supercategory":"table"}]}

TapendraBaduwal commented 1 year ago

@an1018 My JSON file for a single image from the labelme tool is:

{ "version": "3.16.7", "flags": {}, "shapes": [ { "label": "text", "line_color": null, "fill_color": null, "points": [ [ 35.38461538461536, 105.07692307692307 ], [ 35.38461538461536, 264.05128205128204 ], [ 514.8717948717949, 261.4871794871795 ], [ 514.8717948717949, 105.07692307692307 ] ], "shape_type": "polygon", "flags": {} }, { "label": "table", "line_color": null, "fill_color": null, "points": [ [ 514.8717948717949, 105.07692307692307 ], [ 1130.2564102564102, 110.2051282051282 ], [ 1122.5641025641025, 284.56410256410254 ], [ 520.0, 264.05128205128204 ] ], "shape_type": "polygon", "flags": {} }, { "label": "text", "line_color": null, "fill_color": null, "points": [ [ 35.38461538461536, 294.8205128205128 ], [ 1114.871794871795, 307.64102564102564 ], [ 1109.7435897435896, 830.7179487179487 ], [ 30.256410256410277, 825.5897435897435 ] ], "shape_type": "polygon", "flags": {} }, { "label": "figure", "line_color": null, "fill_color": null, "points": [ [ 258.46153846153845, 840.9743589743589 ], [ 268.71794871794873, 1192.2564102564102 ], [ 894.3589743589744, 1199.948717948718 ], [ 899.4871794871794, 843.5384615384615 ] ], "shape_type": "polygon", "flags": {} }, { "label": "text", "line_color": null, "fill_color": null, "points": [ [ 86.66666666666663, 1207.6410256410256 ], [ 89.23076923076917, 1302.5128205128206 ], [ 1089.2307692307693, 1305.076923076923 ], [ 1084.102564102564, 1223.0256410256409 ] ], "shape_type": "polygon", "flags": {} }, { "label": "figure", "line_color": null, "fill_color": null, "points": [ [ 158.46153846153845, 1282.0 ], [ 171.28205128205127, 1548.6666666666665 ], [ 925.1282051282051, 1569.179487179487 ], [ 922.5641025641025, 1312.7692307692307 ] ], "shape_type": "polygon", "flags": {} }, { "label": "text", "line_color": null, "fill_color": null, "points": [ [ 89.23076923076917, 1548.6666666666665 ], [ 89.23076923076917, 1620.4615384615383 ], [ 1091.7948717948718, 1630.7179487179487 ], [ 1089.2307692307693, 1553.7948717948718 ] ], "shape_type": "polygon", "flags": {} } ], "lineColor": [ 0, 255, 0, 128 ], "fillColor": [ 255, 0, 0, 128 ], "imagePath": "Tu_page-0001.jpg", "Someting long key here" "imageHeight": 1650, "imageWidth": 1275 }

an1018 commented 1 year ago

Is training with your own dataset working normally? And did you test your model before exporting it?
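For reference, the usual pre-export check with PaddleDetection looks roughly like this (the config and weight paths mirror the ones used later in this thread; adjust them to your setup):

# Evaluate the trained weights on the validation set
python3 PaddleDetection/tools/eval.py \
    -c ./picodet_lcnet_x1_0_layout.yml \
    -o weights=output/picodet_lcnet_x1_0_layout/best_model.pdparams

# Export to an inference model only once the eval metrics look reasonable
python3 PaddleDetection/tools/export_model.py \
    -c ./picodet_lcnet_x1_0_layout.yml \
    -o weights=output/picodet_lcnet_x1_0_layout/best_model.pdparams \
    --output_dir=./inference_model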

TapendraBaduwal commented 1 year ago

@an1018 The image below is what gets saved in the output dir when I run layout inference.

TapendraBaduwal commented 1 year ago

@an1018 At the end of training I got:

Accumulating evaluation results... DONE (t=0.02s).
Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] =  0.000
Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] =  0.000
Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] =  0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] =  0.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] =  0.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] =  0.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] =  0.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] =  0.000
[01/16 12:33:12] ppdet.engine INFO: Total sample number: 3, average FPS: 1.5615825421222556
[01/16 12:33:12] ppdet.engine INFO: Best test bbox ap is 0.000.
[01/16 12:33:12] ppdet.utils.checkpoint INFO: Save checkpoint: output/picodet_lcnet_x1_0_layout

1061302569 commented 1 year ago

You can try installing PaddlePaddle 2.3

I am getting the same error; changing the version from 2.4 to 2.3 still doesn't work.

1061302569 commented 1 year ago

Is training with your own dataset working normally? Did you test your model before exporting it?

# Test the layout analysis result
!python3 PaddleDetection/tools/infer.py \
    -c ./picodet_lcnet_x1_0_layout.yml \
    -o weights='output/picodet_lcnet_x1_0_layout/best_model.pdparams' \
    --infer_img='data/images/123.png' \
    --output_dir=output_dir/ \
    --draw_threshold=0.5 


1061302569 commented 1 year ago

Is training with your own dataset working normally? Did you test your model before exporting it?

# Generate the training and test sets
!python label_studio.py \
    --label_studio_file ./datasets/label.json \
    --save_dir ./datasets \
    --splits 0.8 0.2 0\
    --task_type ext \
    --layout_analysis True

When layout_analysis is set to True and I use the layout model that I trained myself, I get: ValueError: (InvalidArgument) The conv2d Op's Input Variable Input contains uninitialized Tensor. [Hint: Expected t->IsInitialized() == true, but received t->IsInitialized():0 != true:1.] (at /paddle/paddle/fluid/framework/operator.cc:2411) [operator < conv2d > error]

TapendraBaduwal commented 1 year ago

@1061302569 Yes, I changed the version from 2.4 to 2.3 and it is still not solved. What is the minimum amount of data required to train a layout model? Currently I am fine-tuning on a dataset of 10 images. Is the problem due to too little data? How much data did you train your model on?

1061302569 commented 1 year ago


It should not be because of the small dataset. At the beginning I also trained with only 10 pictures; now I have increased the dataset to 100 pictures and the error is still the same.

TapendraBaduwal commented 1 year ago

@1061302569 @an1018 "Is it a problem due to image size? my images size is like "imageHeight": 1650, "imageWidth": 1275 ? In .yml file image_shape: [1, 3, 800, 608] is mention. Shall we maintain this size while training?

TapendraBaduwal commented 1 year ago

@1061302569 Did you solve this issue?

github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.