PaddlePaddle / PaddleDetection

Object Detection toolkit based on PaddlePaddle. It supports object detection, instance segmentation, multiple object tracking and real-time multi-person keypoint detection.
Apache License 2.0

FatalError: Build TensorRT cuda engine failed! #6270


imsotable commented 2 years ago

Search before asking

Please ask your question

Thanks for your excellent work! When I try FP16 inference, I run:

```
python3 deploy/python/det_keypoint_unite_infer.py \
    --det_model_dir=output_inference/picodet_s_320_pedestrian \
    --keypoint_model_dir=output_inference/tinypose_128x96 \
    --video_file={/home/Packages/PaddleDetection-release-2.4/mov.mp4} \
    --device=GPU \
    --run_mode=trt_fp16
```

but I get:

```
SystemError:

C++ Traceback (most recent call last):
0   paddle_infer::Predictor::Predictor(paddle::AnalysisConfig const&)
1   std::unique_ptr<paddle::PaddlePredictor, std::default_delete > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
2   paddle::AnalysisPredictor::Init(std::shared_ptr const&, std::shared_ptr const&)
3   paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr const&)
4   paddle::AnalysisPredictor::OptimizeInferenceProgram()
5   paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument)
6   paddle::inference::analysis::IrAnalysisPass::RunImpl(paddle::inference::analysis::Argument)
7   paddle::inference::analysis::IRPassManager::Apply(std::unique_ptr<paddle::framework::ir::Graph, std::default_delete >)
8   paddle::framework::ir::Pass::Apply(paddle::framework::ir::Graph) const
9   paddle::inference::analysis::TensorRtSubgraphPass::ApplyImpl(paddle::framework::ir::Graph) const
10  paddle::inference::analysis::TensorRtSubgraphPass::CreateTensorRTOp(paddle::framework::ir::Node, paddle::framework::ir::Graph, std::vector<std::string, std::allocator > const&, std::vector<std::string, std::allocator >) const
11  paddle::inference::tensorrt::OpConverter::ConvertBlockToTRTEngine(paddle::framework::BlockDesc, paddle::framework::Scope const&, std::vector<std::string, std::allocator > const&, std::unordered_set<std::string, std::hash, std::equal_to, std::allocator > const&, std::vector<std::string, std::allocator > const&, paddle::inference::tensorrt::TensorRTEngine)
12  paddle::inference::tensorrt::TensorRTEngine::FreezeNetwork()
13  paddle::platform::EnforceNotMet::EnforceNotMet(paddle::platform::ErrorSummary const&, char const*, int)
14  paddle::platform::GetCurrentTraceBackString[abi:cxx11]()

Error Message Summary:

FatalError: Build TensorRT cuda engine failed! Please recheck you configurations related to paddle-TensorRT.
  [Hint: infer_engine_ should not be null.] (at /home/shine/Packages/Paddle/paddle/fluid/inference/tensorrt/engine.cc:244)
```

My environment:

- paddledet 2.4.0
- paddlepaddle-gpu 0.0.0

Thanks for your reply.

jerrywgz commented 2 years ago

It seems that the TensorRT library is not loaded correctly. You need to install a paddlepaddle build that includes TensorRT first; the wheel list can be found here: https://paddleinference.paddlepaddle.org.cn/user_guides/download_lib.html#python
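For later readers: before reinstalling, it can help to confirm whether the TensorRT runtime (`libnvinfer`, the core library Paddle-TRT links against) is even visible to the dynamic loader. A minimal stdlib-only check, as a sketch:

```python
import ctypes.util

# Paddle-TRT needs libnvinfer at predictor-creation time; if the dynamic
# loader cannot locate it, the TensorRT subgraph pass cannot build an
# engine. find_library returns None when the library is not visible.
path = ctypes.util.find_library("nvinfer")
print("libnvinfer visible to loader:", path is not None)
```

If this prints `False`, adding the TensorRT `lib/` directory to `LD_LIBRARY_PATH` before rerunning the deploy script is usually the first thing to try.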

imsotable commented 2 years ago

> It seems that the TensorRT library is not loaded correctly. You need to install a paddlepaddle build that includes TensorRT first; the wheel list can be found here: https://paddleinference.paddlepaddle.org.cn/user_guides/download_lib.html#python

Thanks for your reply! I installed it as you said, but the problem still exists:

```
FatalError: Build TensorRT cuda engine failed! Please recheck you configurations related to paddle-TensorRT.
  [Hint: infer_engine_ should not be null.] (at /home/paddle/data/wangye19/tag_release/Paddle/paddle/fluid/inference/tensorrt/engine.cc:267)
```
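For anyone debugging this later: the failing step is the TensorRT engine build inside predictor creation, and it can be exercised directly through the `paddle.inference` Python API, which sometimes surfaces a more specific error than the deploy script. A minimal sketch, with hypothetical model paths; the `enable_tensorrt_engine` arguments follow the Paddle Inference API but should be checked against your Paddle version:

```python
# Guarded import so the sketch degrades gracefully where paddle is absent.
try:
    from paddle.inference import Config, PrecisionType, create_predictor
except ImportError:
    Config = None
    print("paddlepaddle is not installed in this environment; sketch only")

if Config is not None:
    # Hypothetical exported model files (e.g. from tools/export_model.py).
    config = Config("model.pdmodel", "model.pdiparams")
    config.enable_use_gpu(200, 0)  # 200 MB initial GPU pool, GPU id 0
    # Offload supported subgraphs to a TensorRT FP16 engine.
    config.enable_tensorrt_engine(
        workspace_size=1 << 30,          # 1 GB engine-build workspace
        max_batch_size=1,
        min_subgraph_size=3,
        precision_mode=PrecisionType.Half,
        use_static=False,
        use_calib_mode=False,
    )
    # The "Build TensorRT cuda engine failed" FatalError is raised here,
    # while the predictor (and its TRT engine) is being created.
    predictor = create_predictor(config)
```

If the same model runs with `--run_mode=paddle` but fails with `trt_fp16`, that points at the TensorRT setup (version mismatch between the paddle wheel's expected TRT and the installed one, or missing `libnvinfer`) rather than at the model itself.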