duongvvuet closed this issue 3 years ago.
You can first check whether the inference model paths are configured correctly. We recommend using Paddle 2.0.0b0 and the corresponding inference library.
@littletomatodonkey I built the Paddle C++ inference library for CPU following https://github.com/PaddlePaddle/PaddleOCR/blob/develop/deploy/cpp_infer/readme_en.md
Everything is fine until I run the demo with sh tools/run.sh.
Are the paths to the detection, direction classifier, and recognition models correct? You should download the models and change the paths in the config to your own locations.
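For example, here is a minimal sketch of the relevant entries, assuming the demo reads its settings from a tools/config.txt file as in the cpp_infer readme (the file name and comment syntax are assumptions; point the directories at wherever you extracted the downloaded inference models):

# model locations used by the C++ demo (example paths, adjust to your setup)
det_model_dir    ./inference/det_db
rec_model_dir    ./inference/rec_crnn
cls_model_dir    ./inference/cls
char_list_file   ../../ppocr/utils/ppocr_keys_v1.txt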
@littletomatodonkey I'm using absolute paths, but ... I still have the same problem:

=======Paddle OCR inference config======
char_list_file : /home/duongvu/Tools/PaddleOCR/ppocr/utils/ppocr_keys_v1.txt
cls_model_dir : /home/duongvu/Tools/PaddleOCR/deploy/cpp_infer/inference/cls
cls_thresh : 0.9
cpu_math_library_num_threads : 8
det_db_box_thresh : 0.5
det_db_thresh : 0.3
det_db_unclip_ratio : 1.6
det_model_dir : /home/duongvu/Tools/PaddleOCR/deploy/cpp_infer/inference/det_db
gpu_id : 0
gpu_mem : 4000
max_side_len : 960
rec_model_dir : /home/duongvu/Tools/PaddleOCR/deploy/cpp_infer/inference/rec_crnn
use_angle_cls : 0
use_gpu : 0
use_mkldnn : 1
use_zero_copy_run : 1
visualize : 1
You can try using Paddle 2.0 or the latest Paddle 2.0-rc inference library for testing; use_mkldnn can also be disabled if the problem persists.
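A minimal sketch of that change, again assuming the demo's settings live in tools/config.txt (an assumption; edit whichever config file your run.sh actually reads):

# run on plain CPU without the MKL-DNN graph passes
use_gpu      0
use_mkldnn   0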
=======Paddle OCR inference config======
char_list_file : ../../ppocr/utils/ppocr_keys_v1.txt
cls_model_dir : ./inference/cls
cls_thresh : 0.9
cpu_math_library_num_threads : 8
det_db_box_thresh : 0.5
det_db_thresh : 0.3
det_db_unclip_ratio : 1.6
det_model_dir : ./inference/det_db
gpu_id : 0
gpu_mem : 4000
max_side_len : 960
rec_model_dir : ./inference/rec_crnn
use_angle_cls : 0
use_gpu : 0
use_mkldnn : 1
use_zero_copy_run : 1
visualize : 1
=======End of Paddle OCR inference config======
--- fused 0 scale with matmul
--- Fused 0 ReshapeTransposeMatmulMkldnn patterns
--- Fused 0 ReshapeTransposeMatmulMkldnn patterns with transpose's xshape
--- Fused 0 ReshapeTransposeMatmulMkldnn patterns with reshape's xshape
--- Fused 0 ReshapeTransposeMatmulMkldnn patterns with reshape's xshape with transpose's xshape
--- Fused 0 MatmulTransposeReshape patterns
--- fused 0 scale with matmul
--- Fused 0 ReshapeTransposeMatmulMkldnn patterns
--- Fused 0 ReshapeTransposeMatmulMkldnn patterns with transpose's xshape
--- Fused 0 ReshapeTransposeMatmulMkldnn patterns with reshape's xshape
--- Fused 0 ReshapeTransposeMatmulMkldnn patterns with reshape's xshape with transpose's xshape
--- Fused 0 MatmulTransposeReshape patterns
terminate called after throwing an instance of 'paddle::platform::EnforceNotMet'
  what():
Compile Traceback (most recent call last):
  File "/home/duongvu/anaconda3/envs/OCR/lib/python3.6/site-packages/paddle/fluid/framework.py", line 2610, in append_op
    attrs=kwargs.get("attrs", None))
C++ Traceback (most recent call last):
0 paddle::AnalysisPredictor::ZeroCopyRun()
1 paddle::framework::NaiveExecutor::Run()
2 paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, paddle::platform::Place const&)
3 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&) const
4 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&, paddle::framework::RuntimeContext*) const
5 paddle::framework::OperatorWithKernel::PrepareData(paddle::framework::Scope const&, paddle::framework::OpKernelType const&, std::vector<std::string, std::allocator<std::string> >*, paddle::framework::RuntimeContext*) const
6 paddle::framework::TransformData(paddle::framework::OpKernelType const&, paddle::framework::OpKernelType const&, paddle::framework::Tensor const&, paddle::framework::Tensor*)
7 paddle::framework::TransDataLayoutFromMKLDNN(paddle::framework::OpKernelType const&, paddle::framework::OpKernelType const&, paddle::framework::Tensor const&, paddle::framework::Tensor*)
8 paddle::framework::innerTransDataLayoutFromMKLDNN(paddle::framework::DataLayout, paddle::framework::DataLayout, paddle::framework::Tensor const&, paddle::framework::Tensor*, paddle::platform::Place)
9 paddle::framework::Tensor::mutable_data(paddle::platform::Place const&, paddle::framework::proto::VarType_Type, unsigned long)
10 paddle::memory::AllocShared(paddle::platform::Place const&, unsigned long)
11 paddle::memory::allocation::AllocatorFacade::AllocShared(paddle::platform::Place const&, unsigned long)
12 paddle::memory::allocation::AllocatorFacade::Alloc(paddle::platform::Place const&, unsigned long)
13 paddle::memory::allocation::NaiveBestFitAllocator::AllocateImpl(unsigned long)
14 void paddle::memory::legacy::Alloc(paddle::platform::CPUPlace const&, unsigned long)
15 paddle::memory::detail::BuddyAllocator::Alloc(unsigned long)
16 paddle::memory::detail::BuddyAllocator::SplitToAlloc(std::_Rb_tree_const_iterator<std::tuple<unsigned long, unsigned long, void*> >, unsigned long)
17 paddle::memory::detail::MemoryBlock::Split(paddle::memory::detail::MetadataCache*, unsigned long)
18 paddle::platform::GetCurrentTraceBackString[abi:cxx11]()
Error Message Summary:
InvalidArgumentError: The size of memory block (0) to split is not larger than size of request memory (1609728)
  [Hint: Expected desc->total_size >= size, but received desc->total_size:0 < size:1609728.] (at /home/duongvu/Tools/Paddle/paddle/fluid/memory/detail/memory_block.cc:46)
  [operator < hard_swish > error]
Aborted (core dumped)
Why does this happen, and how do I fix it?