I recently ran into this problem as well while following the official quick-inference tutorial on Ubuntu, and I reproduced it on two different machines.
I am having the same issue on Arch Linux: I got the same error when executing the PPStructure code on my CPU. Here is the code:
import os
import cv2
from paddleocr import PPStructure, save_structure_res

img_path = './img.jpeg'
save_folder = './saveFolder'

table_engine = PPStructure(show_log=True, image_orientation=True)
img = cv2.imread(img_path)
result = table_engine(img)
save_structure_res(result, save_folder, os.path.basename(img_path).split('.')[0])
And this is the line causing the problem:
table_engine = PPStructure(show_log=True, image_orientation=True)
I figured that out by keeping only that line in my code and still getting the error.
I hope someone can help, since everything was working and then it suddenly stopped working! Thank you, and keep up the great work on PaddleOCR.
This is caused by PaddlePaddle. Please check whether PaddlePaddle is installed successfully:
>>> import paddle
>>> paddle.utils.run_check()
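A couple of extra checks can also help narrow this down; this is a minimal sketch, assuming a reasonably recent Paddle release, where is_compiled_with_cuda() tells you whether the installed build is GPU-enabled:

>>> paddle.__version__                      # confirm the installed version
>>> paddle.device.is_compiled_with_cuda()   # False means a CPU-only build
>>> paddle.device.set_device('cpu')         # explicitly run on CPU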
Hello,
I used the latin PP-OCRv3 recognition model, fine-tuned it on my data, exported the model, and ran inference. All of that worked fine. Then I tried to use the model and run the inference on the CPU. This is everything I ran:
That works on CPU as well. Can you help me figure out why my fine-tuned model is not working? Is there a specific way to export it so that it can also be run on the CPU?
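For reference, the kind of CPU inference command I mean is roughly the following; the image path, dictionary file, and image shape are illustrative values for my setup, not a confirmed recipe:

!python3 tools/infer/predict_rec.py \
    --image_dir="./img.jpeg" \
    --rec_model_dir="/content/inference_new" \
    --rec_char_dict_path="ppocr/utils/dict/latin_dict.txt" \
    --rec_image_shape="3, 48, 320" \
    --use_gpu=False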
I exported it like this:
!python3 tools/export_model.py -c /content/PaddleOCR/latin_PP-OCRv3_rec.yml -o Global.pretrained_model=/content/PaddleOCR/output/v3_latin_mobile/latest Global.save_inference_dir=/content/inference_new
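In case it is useful, here is a minimal sketch of how I would expect to load the exported directory with the paddleocr Python package on CPU; the dictionary path and the recognition-only settings are my assumptions, not a confirmed setup:

from paddleocr import PaddleOCR
import cv2

# point the recognizer at the exported inference directory and force CPU
ocr = PaddleOCR(
    rec_model_dir='/content/inference_new',
    rec_char_dict_path='ppocr/utils/dict/latin_dict.txt',  # assumed: same dict as used for fine-tuning
    use_angle_cls=False,
    use_gpu=False,
)

img = cv2.imread('./img.jpeg')  # placeholder image path
# recognition only: skip detection, assuming the input is already a cropped text line
result = ocr.ocr(img, det=False, cls=False)
print(result)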