PaddlePaddle / PaddleOCR

Awesome multilingual OCR toolkits based on PaddlePaddle (practical ultra lightweight OCR system, support 80+ languages recognition, provide data annotation and synthesis tools, support training and deployment among server, mobile, embedded and IoT devices)
https://paddlepaddle.github.io/PaddleOCR/
Apache License 2.0

Training a model on CPU under CentOS 7 fails with a "file not found" error, but the file path does exist #1574

Closed yiyi99 closed 3 years ago

yiyi99 commented 3 years ago

[root@localhost PaddleOCR]# python3 tools/train.py -c configs/rec/rec_icdar15_train.yml
[2020/12/24 17:35:55] root INFO: Architecture :
[2020/12/24 17:35:55] root INFO: Backbone :
[2020/12/24 17:35:55] root INFO: model_name : large
[2020/12/24 17:35:55] root INFO: name : MobileNetV3
[2020/12/24 17:35:55] root INFO: scale : 0.5
[2020/12/24 17:35:55] root INFO: Head :
[2020/12/24 17:35:55] root INFO: fc_decay : 0
[2020/12/24 17:35:55] root INFO: name : CTCHead
[2020/12/24 17:35:55] root INFO: Neck :
[2020/12/24 17:35:55] root INFO: encoder_type : rnn
[2020/12/24 17:35:55] root INFO: hidden_size : 96
[2020/12/24 17:35:55] root INFO: name : SequenceEncoder
[2020/12/24 17:35:55] root INFO: Transform : None
[2020/12/24 17:35:55] root INFO: algorithm : CRNN
[2020/12/24 17:35:55] root INFO: model_type : rec
[2020/12/24 17:35:55] root INFO: Eval :
[2020/12/24 17:35:55] root INFO: dataset :
[2020/12/24 17:35:55] root INFO: data_dir : ./train_datas/
[2020/12/24 17:35:55] root INFO: label_file_list : ['./train_datas/Label_new.txt']
[2020/12/24 17:35:55] root INFO: name : SimpleDataSet
[2020/12/24 17:35:55] root INFO: transforms :
[2020/12/24 17:35:55] root INFO: DecodeImage :
[2020/12/24 17:35:55] root INFO: channel_first : False
[2020/12/24 17:35:55] root INFO: img_mode : BGR
[2020/12/24 17:35:55] root INFO: CTCLabelEncode : None
[2020/12/24 17:35:55] root INFO: RecResizeImg :
[2020/12/24 17:35:55] root INFO: image_shape : [3, 32, 100]
[2020/12/24 17:35:55] root INFO: KeepKeys :
[2020/12/24 17:35:55] root INFO: keep_keys : ['image', 'label', 'length']
[2020/12/24 17:35:55] root INFO: loader :
[2020/12/24 17:35:55] root INFO: batch_size_per_card : 256
[2020/12/24 17:35:55] root INFO: drop_last : False
[2020/12/24 17:35:55] root INFO: num_workers : 1
[2020/12/24 17:35:55] root INFO: shuffle : False
[2020/12/24 17:35:55] root INFO: Global :
[2020/12/24 17:35:55] root INFO: cal_metric_during_train : True
[2020/12/24 17:35:55] root INFO: character_dict_path : ppocr/utils/ic15_dict.txt
[2020/12/24 17:35:55] root INFO: character_type : ch
[2020/12/24 17:35:55] root INFO: checkpoints : None
[2020/12/24 17:35:55] root INFO: debug : False
[2020/12/24 17:35:55] root INFO: distributed : False
[2020/12/24 17:35:55] root INFO: epoch_num : 72
[2020/12/24 17:35:55] root INFO: eval_batch_step : [0, 2000]
[2020/12/24 17:35:55] root INFO: infer_img : doc/imgs_words_en/word_10.png
[2020/12/24 17:35:55] root INFO: infer_mode : False
[2020/12/24 17:35:55] root INFO: log_smooth_window : 20
[2020/12/24 17:35:55] root INFO: max_text_length : 25
[2020/12/24 17:35:55] root INFO: pretrained_model : None
[2020/12/24 17:35:55] root INFO: print_batch_step : 10
[2020/12/24 17:35:55] root INFO: save_epoch_step : 3
[2020/12/24 17:35:55] root INFO: save_inference_dir : None
[2020/12/24 17:35:55] root INFO: save_model_dir : ./output/rec/ic15/
[2020/12/24 17:35:55] root INFO: use_gpu : False
[2020/12/24 17:35:55] root INFO: use_space_char : False
[2020/12/24 17:35:55] root INFO: use_visualdl : False
[2020/12/24 17:35:55] root INFO: Loss :
[2020/12/24 17:35:55] root INFO: name : CTCLoss
[2020/12/24 17:35:55] root INFO: Metric :
[2020/12/24 17:35:55] root INFO: main_indicator : acc
[2020/12/24 17:35:55] root INFO: name : RecMetric
[2020/12/24 17:35:55] root INFO: Optimizer :
[2020/12/24 17:35:55] root INFO: beta1 : 0.9
[2020/12/24 17:35:55] root INFO: beta2 : 0.999
[2020/12/24 17:35:55] root INFO: lr :
[2020/12/24 17:35:55] root INFO: learning_rate : 0.0005
[2020/12/24 17:35:55] root INFO: name : Adam
[2020/12/24 17:35:55] root INFO: regularizer :
[2020/12/24 17:35:55] root INFO: factor : 0
[2020/12/24 17:35:55] root INFO: name : L2
[2020/12/24 17:35:55] root INFO: PostProcess :
[2020/12/24 17:35:55] root INFO: name : CTCLabelDecode
[2020/12/24 17:35:55] root INFO: Train :
[2020/12/24 17:35:55] root INFO: dataset :
[2020/12/24 17:35:55] root INFO: data_dir : ./train_datas/
[2020/12/24 17:35:55] root INFO: label_file_list : ['./train_datas/Label_new.txt']
[2020/12/24 17:35:55] root INFO: name : SimpleDataSet
[2020/12/24 17:35:55] root INFO: transforms :
[2020/12/24 17:35:55] root INFO: DecodeImage :
[2020/12/24 17:35:55] root INFO: channel_first : False
[2020/12/24 17:35:55] root INFO: img_mode : BGR
[2020/12/24 17:35:55] root INFO: CTCLabelEncode : None
[2020/12/24 17:35:55] root INFO: RecResizeImg :
[2020/12/24 17:35:55] root INFO: image_shape : [3, 32, 100]
[2020/12/24 17:35:55] root INFO: KeepKeys :
[2020/12/24 17:35:55] root INFO: keep_keys : ['image', 'label', 'length']
[2020/12/24 17:35:55] root INFO: loader :
[2020/12/24 17:35:55] root INFO: batch_size_per_card : 256
[2020/12/24 17:35:55] root INFO: drop_last : True
[2020/12/24 17:35:55] root INFO: num_workers : 1
[2020/12/24 17:35:55] root INFO: shuffle : True
[2020/12/24 17:35:55] root INFO: train with paddle 2.0.0-rc1 and device CPUPlace
[2020/12/24 17:35:55] root INFO: Initialize indexs of datasets:['./train_datas/Label_new.txt']
[2020/12/24 17:35:55] root INFO: Initialize indexs of datasets:['./train_datas/Label_new.txt']
[2020/12/24 17:35:55] root INFO: train from scratch
[2020/12/24 17:35:55] root INFO: train dataloader has 1 iters, valid dataloader has 2 iters
[2020/12/24 17:35:55] root INFO: During the training process, after the 0th iteration, an evaluation is run every 2000 iterations
[2020/12/24 17:35:55] root ERROR: When parsing line t000.png [{"transcription": "白细胞", "points": [[83.0, 5.0], [201.0, 7.0], [200.0, 49.0], [82.0, 47.0]], "difficult": false}, {"transcription": "WBC", "points": [[405.0, 7.0], [485.0, 9.0], [484.0, 49.0], [404.0, 47.0]], "difficult": false}, {"transcription": "7.87", "points": [[559.0, 9.0], [626.0, 9.0], [626.0, 53.0], [559.0, 53.0]], "difficult": false}, {"transcription": "10~9/L", "points": [[689.0, 14.0], [802.0, 19.0], [801.0, 56.0], [688.0, 51.0]], "difficult": false}, {"transcription": "3.5-9.5", "points": [[815.0, 17.0], [912.0, 17.0], [912.0, 54.0], [815.0, 54.0]], "difficult": false}] , error happened with msg: ./train_datas/t000.png does not exist!
[2020/12/24 17:35:55] root ERROR: When parsing line t000.png [{"transcription": "白细胞", "points": [[83.0, 5.0], [201.0, 7.0], [200.0, 49.0], [82.0, 47.0]], "difficult": false}, {"transcription": "WBC", "points": [[405.0, 7.0], [485.0, 9.0], [484.0, 49.0], [404.0, 47.0]], "difficult": false}, {"transcription": "7.87", "points": [[559.0, 9.0], [626.0, 9.0], [626.0, 53.0], [559.0, 53.0]], "difficult": false}, {"transcription": "10~9/L", "points": [[689.0, 14.0], [802.0, 19.0], [801.0, 56.0], [688.0, 51.0]], "difficult": false}, {"transcription": "3.5-9.5", "points": [[815.0, 17.0], [912.0, 17.0], [912.0, 54.0], [815.0, 54.0]], "difficult": false}] , error happened with msg: ./train_datas/t000.png does not exist!
[2020/12/24 17:35:55] root ERROR: When parsing line t194.png [{"transcription": "粒细胞百分比(Gran%)", "points": [[1.0, 3.0], [191.0, 3.0], [191.0, 29.0], [1.0, 29.0]], "difficult": false}, {"transcription": "58.3", "points": [[299.0, 3.0], [346.0, 3.0], [346.0, 30.0], [299.0, 30.0]], "difficult": false}, {"transcription": "%", "points": [[355.0, 4.0], [381.0, 4.0], [381.0, 30.0], [355.0, 30.0]], "difficult": false}, {"transcription": "50.0--70.0", "points": [[464.0, 4.0], [572.0, 4.0], [572.0, 31.0], [464.0, 31.0]], "difficult": false}] , error happened with msg: maximum recursion depth exceeded in comparison
Fatal Python error: Cannot recover from stack overflow.

Current thread 0x00007fc382731740 (most recent call first):
  File "/PaddleOCR/ppocr/data/imaug/label_ops.py", line 140 in encode
  File "/PaddleOCR/ppocr/data/imaug/label_ops.py", line 171 in __call__
  File "/PaddleOCR/ppocr/data/imaug/__init__.py", line 38 in transform
  File "/PaddleOCR/ppocr/data/simple_dataset.py", line 83 in __getitem__
  File "/PaddleOCR/ppocr/data/simple_dataset.py", line 90 in __getitem__
  File "/PaddleOCR/ppocr/data/simple_dataset.py", line 90 in __getitem__
  File "/PaddleOCR/ppocr/data/simple_dataset.py", line 90 in __getitem__
  [the simple_dataset.py line 90 __getitem__ frame repeats until the recursion limit is hit]
  ...


C++ Traceback (most recent call last):

0   paddle::framework::SignalHandle(char const*, int)
1   paddle::platform::GetCurrentTraceBackString()


Error Message Summary:

FatalError: Process abort signal is detected by the operating system. [TimeInfo: Aborted at 1608802555 (unix time) try "date -d @1608802555" if you are using GNU date ] [SignalInfo: SIGABRT (@0x8b1) received by PID 2225 (TID 0x7fc382731740) from PID 2225 ]

ERROR:root:DataLoader reader thread raised an exception!
Traceback (most recent call last):
  File "tools/train.py", line 114, in <module>
    main(config, device, logger, vdl_writer)
  File "tools/train.py", line 91, in main
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/local/lib64/python3.6/site-packages/paddle/fluid/dataloader/dataloader_iter.py", line 676, in _get_data
    data = self._data_queue.get(timeout=self._timeout)
  File "/usr/lib64/python3.6/multiprocessing/queues.py", line 105, in get
    raise Empty
queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib64/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib64/python3.6/site-packages/paddle/fluid/dataloader/dataloader_iter.py", line 608, in _thread_loop
    batch = self._get_data()
  File "/usr/local/lib64/python3.6/site-packages/paddle/fluid/dataloader/dataloader_iter.py", line 692, in _get_data
    "pids: {}".format(len(failed_workers), pids))
RuntimeError: DataLoader 1 workers exit unexpectedly, pids: 2225

eval_class, pre_best_model_dict, logger, vdl_writer)

File "/PaddleOCR/tools/program.py", line 191, in train for idx, batch in enumerate(train_dataloader): File "/usr/local/lib64/python3.6/site-packages/paddle/fluid/dataloader/dataloader_iter.py", line 771, in next data = self._reader.read_next_varlist() SystemError: (Fatal) Blocking queue is killed because the data reader raises an exception. [Hint: Expected killed != true, but received killed_:1 == true:1.] (at /paddle/paddle/fluid/operators/reader/blocking_queue.h:154) [Hint: If you need C++ stacktraces for debugging, please set FLAGS_call_stack_level=2.] 执行命令: [root@localhost PaddleOCR]# ls ./train_datas/t000.png ./train_datas/t000.png [root@localhost PaddleOCR]# Label_new.txt文件是在windows下使用PPOCRLabel进行标注生成的

yiyi99 commented 3 years ago

(screenshot attached)

MissPenguin commented 3 years ago

Your command trains the recognition model, but your data is in the detection format. For how to organize a recognition dataset, see the documentation: https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_ch/recognition.md#%E8%87%AA%E5%AE%9A%E4%B9%89%E6%95%B0%E6%8D%AE%E9%9B%86
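For readers hitting the same error: recognition training expects a label file whose lines are simply an image path, a tab, and the transcription of that cropped text line, while PPOCRLabel writes one JSON list of boxes per full image (the detection format seen in the log above). The sketch below is one possible way to convert the detection-format Label_new.txt into recognition crops and labels. It is only an illustrative sketch: the output directory, the rec_gt.txt file name, the tab-separator assumption, and the axis-aligned cropping are my own choices, not something stated in this thread (newer PPOCRLabel versions may be able to export recognition crops directly).

```python
# Hedged sketch: convert a PPOCRLabel detection-format label file into
# recognition-format crops plus a rec label file ("<image path>\t<text>" per line).
# File names and output locations below are illustrative assumptions.
import json
import os

import cv2
import numpy as np

det_label_path = "./train_datas/Label_new.txt"   # detection labels from PPOCRLabel
data_dir = "./train_datas"
out_dir = os.path.join(data_dir, "rec_crops")
os.makedirs(out_dir, exist_ok=True)

with open(det_label_path, "r", encoding="utf-8") as fin, \
        open(os.path.join(data_dir, "rec_gt.txt"), "w", encoding="utf-8") as fout:
    for line in fin:
        # assumed PPOCRLabel line layout: "<image name>\t<json list of boxes>"
        img_name, anno = line.rstrip("\n").split("\t", 1)
        img = cv2.imread(os.path.join(data_dir, img_name))
        if img is None:
            continue  # skip entries whose image cannot be read
        for i, box in enumerate(json.loads(anno)):
            pts = np.array(box["points"], dtype=np.int32)
            x, y, w, h = cv2.boundingRect(pts)       # axis-aligned crop of the text box
            crop = img[y:y + h, x:x + w]
            crop_name = "{}_{}.png".format(os.path.splitext(img_name)[0], i)
            cv2.imwrite(os.path.join(out_dir, crop_name), crop)
            # recognition label format: image path (relative to data_dir), tab, transcription
            fout.write("rec_crops/{}\t{}\n".format(crop_name, box["transcription"]))
```

The resulting rec_gt.txt could then be pointed to from label_file_list in the recognition config in place of the detection-format label file.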

yiyi99 commented 3 years ago

Your command trains the recognition model, but your data is in the detection format. For how to organize a recognition dataset, see the documentation: https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_ch/recognition.md#%E8%87%AA%E5%AE%9A%E4%B9%89%E6%95%B0%E6%8D%AE%E9%9B%86

Thank you. If I want to improve the recognition accuracy of certain characters on top of PaddleOCR's own pretrained model, without modifying the dictionary, do I need to train detection as well, or is training only recognition enough? Also, if the text in an image contains spaces, do the spaces need to be included in the label file?

LDOUBLEV commented 3 years ago

Thank you. If I want to improve the recognition accuracy of certain characters on top of PaddleOCR's own pretrained model, without modifying the dictionary, do I need to train detection as well?

If detection already works well enough, there is no need to train detection.

Also, if the text in an image contains spaces, do the spaces need to be included in the label file?

If the spaces are part of what you want to recognize, it's best to include them.
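One related detail, based on the config dump earlier in this issue rather than on the maintainers' reply: the training run shows use_space_char : False, and as far as I understand that flag controls whether the space character is appended to the recognition character set. So if spaces should be recognized, the Global section of the training config would presumably also need something like the snippet below (an assumed excerpt, not the full config):

```yaml
Global:
  # include the space character in the recognition charset
  use_space_char: True
```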

yiyi99 commented 3 years ago

Thank you. If I want to improve the recognition accuracy of certain characters on top of PaddleOCR's own pretrained model, without modifying the dictionary, do I need to train detection as well?

If detection already works well enough, there is no need to train detection.

Also, if the text in an image contains spaces, do the spaces need to be included in the label file?

If the spaces are part of what you want to recognize, it's best to include them.

Thank you.