PaddlePaddle / PaddleOCR

Awesome multilingual OCR toolkits based on PaddlePaddle (practical ultra lightweight OCR system, support 80+ languages recognition, provide data annotation and synthesis tools, support training and deployment among server, mobile, embedded and IoT devices)
https://paddlepaddle.github.io/PaddleOCR/
Apache License 2.0

CPU inference failing #11560

Closed juvebogdan closed 4 months ago

juvebogdan commented 9 months ago

Hello,

I used the latin PP-OCRv3 model for recognition, fine-tuned it on my data, exported the model, and ran inference. All of that worked fine. Then I tried to use the model and run inference on the CPU. This is everything I ran:

!pip3 install --upgrade pip
!python3 -m pip install paddlepaddle==2.6.0 -i https://mirror.baidu.com/pypi/simple
!git clone https://github.com/PaddlePaddle/PaddleOCR
%cd PaddleOCR
!pip3 install -r requirements.txt
!git checkout remotes/origin/dygraph
!wget https://paddleocr.bj.bcebos.com/PP-OCRv3/multilingual/Multilingual_PP-OCRv3_det_infer.tar
!tar -xvf Multilingual_PP-OCRv3_det_infer.tar
%cd /content/
!tar -xvf /content/model.tar
%cd /content/PaddleOCR/

model.tar is my fine-tuned rec model. After this I ran:

!python3 tools/infer/predict_system.py --image_dir="/content/licmaj1jpg.jpg" \
--det_model_dir="/content/PaddleOCR/Multilingual_PP-OCRv3_det_infer" --rec_model_dir="/content/content/inference_new" \
--rec_char_dict_path="/content/PaddleOCR/ppocr/utils/dict/latin_dict.txt" --use_gpu=False --use_mp=True --total_process_num=2

and this was working on GPU, but on CPU I get:

--------------------------------------
C++ Traceback (most recent call last):
--------------------------------------
0   paddle_infer::Predictor::Predictor(paddle::AnalysisConfig const&)
1   std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
2   paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&)
3   paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr<paddle::framework::ProgramDesc> const&)
4   paddle::AnalysisPredictor::OptimizeInferenceProgram()
5   paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument*)
6   paddle::inference::analysis::IrAnalysisPass::RunImpl(paddle::inference::analysis::Argument*)
7   paddle::inference::analysis::IRPassManager::Apply(std::unique_ptr<paddle::framework::ir::Graph, std::default_delete<paddle::framework::ir::Graph> >)
8   paddle::framework::ir::Pass::Apply(paddle::framework::ir::Graph*) const
9   paddle::framework::ir::SelfAttentionFusePass::ApplyImpl(paddle::framework::ir::Graph*) const
10  paddle::framework::ir::GraphPatternDetector::operator()(paddle::framework::ir::Graph*, std::function<void (std::map<paddle::framework::ir::PDNode*, paddle::framework::ir::Node*, paddle::framework::ir::GraphPatternDetector::PDNodeCompare, std::allocator<std::pair<paddle::framework::ir::PDNode* const, paddle::framework::ir::Node*> > > const&, paddle::framework::ir::Graph*)>)

----------------------
Error Message Summary:
----------------------
FatalError: `Illegal instruction` is detected by the operating system.
  [TimeInfo: *** Aborted at 1706704484 (unix time) try "date -d @1706704484" if you are using GNU date ***]
  [SignalInfo: *** SIGILL (@0x7ced45c1e86a) received by PID 1449 (TID 0x7ced5c149000) from PID 1170335850 ***]
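A `SIGILL` raised while the predictor is still being constructed usually means the binary executed a CPU instruction the host does not support; the default paddlepaddle wheels assume AVX. A minimal, stdlib-only check for that flag (a diagnostic sketch, not part of PaddleOCR; it assumes a Linux host such as Colab):

```python
# Check /proc/cpuinfo for the AVX flag that the default paddlepaddle
# wheels are compiled against (Linux-only diagnostic sketch).
def cpu_has_flag(flag: str) -> bool:
    try:
        with open("/proc/cpuinfo") as f:
            return any(flag in line.split() for line in f if line.startswith("flags"))
    except FileNotFoundError:
        return False  # non-Linux host: /proc/cpuinfo is not available

print("avx supported:", cpu_has_flag("avx"))
```

If AVX is missing, a noavx build of paddlepaddle would be needed; if it is present, the crash is more likely in a specific IR pass (see the traceback above).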

(the second worker process, spawned by --use_mp=True with --total_process_num=2, prints an identical traceback, differing only in the PID.)

I am running this in Google Colab. If I use the pretrained model like this:
!wget https://paddleocr.bj.bcebos.com/PP-OCRv3/multilingual/latin_PP-OCRv3_rec_infer.tar
!tar -xvf latin_PP-OCRv3_rec_infer.tar

!python3 tools/infer/predict_system.py --image_dir="/content/licmaj1jpg.jpg" \
--det_model_dir="/content/PaddleOCR/Multilingual_PP-OCRv3_det_infer" --rec_model_dir="/content/PaddleOCR/latin_PP-OCRv3_rec_infer" \
--rec_char_dict_path="/content/PaddleOCR/ppocr/utils/dict/latin_dict.txt" --use_gpu=False --use_mp=True --total_process_num=2

this works on the CPU as well. Can you help me figure out why my fine-tuned model is not working? Is there a specific way to export it so that it can also be run on the CPU?

I exported it like this:

!python3 tools/export_model.py -c /content/PaddleOCR/latin_PP-OCRv3_rec.yml -o Global.pretrained_model=/content/PaddleOCR/output/v3_latin_mobile/latest Global.save_inference_dir=/content/inference_new
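The C++ traceback above ends in SelfAttentionFusePass, i.e. the crash happens while an IR fusion pass is applied during graph optimization, not while the weights load. For anyone who drives the exported model through the paddle.inference API directly (an assumption — predict_system.py builds its config internally), one thing to try is removing that pass, or disabling IR optimization, when creating the CPU predictor. A hedged sketch; the pass name is read off the traceback's class name and the paths are placeholders:

```python
def make_cpu_config(model_path: str, params_path: str):
    """Build a CPU-only paddle.inference Config with the crashing
    IR fuse pass removed. Paths point at the exported model files."""
    from paddle.inference import Config  # imported lazily so the sketch loads without paddle

    config = Config(model_path, params_path)
    config.disable_gpu()
    # Drop the pass named in the C++ traceback; alternatively,
    # config.switch_ir_optim(False) disables graph fusion wholesale.
    config.delete_pass("self_attention_fuse_pass")
    return config
```

The resulting config would then be passed to paddle.inference.create_predictor(...). Whether "self_attention_fuse_pass" is exactly the registered pass name is an assumption.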

PPZPPZ commented 9 months ago

I recently ran into this problem as well while following the official quick-inference tutorial on Ubuntu, and I reproduced it on two different machines.

Amine-bc commented 8 months ago

I am having the same issue on Arch Linux: I got the same error when executing the PPStructure code on my CPU. Here is the code:

    import os

    import cv2
    from paddleocr import PPStructure, save_structure_res

    table_engine = PPStructure(show_log=True, image_orientation=True)
    save_folder = './saveFolder'
    img_path = './img.jpeg'  # must be set before reading the image
    img = cv2.imread(img_path)
    result = table_engine(img)
    save_structure_res(result, save_folder, os.path.basename(img_path).split('.')[0])

And this is the line causing the problem:

    table_engine = PPStructure(show_log=True, image_orientation=True)

I verified that by keeping only this line in my code, and I still got the error.

I hope someone can help, since everything was working and then it suddenly stopped! Thank you, and keep up the great work on PaddleOCR.

SWHL commented 4 months ago

This is caused by PaddlePaddle. Please check whether PaddlePaddle is installed successfully:

    >>> import paddle
    >>> paddle.utils.run_check()
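On a CPU affected by this bug, run_check() itself dies with the same SIGILL and takes the interpreter down with it. Running the check in a subprocess gives a clean verdict either way; a sketch, assuming a POSIX system:

```python
import subprocess
import sys

# Run the paddle self-check in a child process so a SIGILL kills
# the child rather than the interactive session.
proc = subprocess.run(
    [sys.executable, "-c", "import paddle; paddle.utils.run_check()"],
    capture_output=True,
    text=True,
)
if proc.returncode < 0:
    # Negative return codes mean "killed by signal" on POSIX (SIGILL is 4).
    print(f"check crashed with signal {-proc.returncode}")
elif proc.returncode != 0:
    print("check failed (paddle missing or broken)")
else:
    print("paddle check passed")
```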