[Open] hezhi1220 opened this issue 1 year ago
Set export GLOG_v=5, then rerun and see what the exact error is.
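In shell form, that suggestion looks like the sketch below. The rerun command is the one from run.sh quoted at the bottom of this issue; it is left commented out here since the paths are specific to the reporter's machine.

```shell
# Raise glog verbosity so Paddle Inference prints detailed internal logs,
# then rerun the exact command that crashed.
export GLOG_v=5

# Same invocation as in the reporter's run.sh:
# python deploy/pipeline/pipeline.py --config run/config/infer_cfg_vehicle_attr.yml \
#     --output_dir run/output --video_file run/video_file/test0.mp4 \
#     --device gpu --run_mode trt_fp16
```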
Multi-Object Tracking enabled
Vehicle Plate Recognition enabled
Vehicle Attribute Recognition enabled
DET model dir: run/infer_model/mot_ppyoloe_s_36e_ppvehicle/
MOT model dir: run/infer_model/mot_ppyoloe_s_36e_ppvehicle/
det_model_dir model dir: run/infer_model/ch_PP-OCRv3_det_infer/
rec_model_dir model dir: run/infer_model/ch_PP-OCRv3_rec_infer/
VEHICLE_ATTR model dir: run/infer_model/vehicle_attribute_model/
E0119 13:18:42.726099 10018 helper.h:111] 3: [optimizationProfile.cpp::setDimensions::128] Error Code 3: Internal Error (Parameter check failed at: runtime/api/optimizationProfile.cpp::setDimensions::128, condition: std::all_of(dims.d, dims.d + dims.nbDims, [](int32_t x) noexcept { return x >= 0; }) )
E0119 13:18:42.726310 10018 helper.h:111] 3: [builderConfig.cpp::addOptimizationProfile::301] Error Code 3: Internal Error (Parameter check failed at: optimizer/api/builderConfig.cpp::addOptimizationProfile::301, condition: profile->isValid() )
E0119 13:18:42.731899 10018 helper.h:111] 4: [network.cpp::validate::2684] Error Code 4: Internal Error (Network has dynamic or shape inputs, but no optimization profile has been defined.)
E0119 13:18:42.732045 10018 helper.h:111] 2: [builder.cpp::buildSerializedNetwork::417] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed.)
0 paddle_infer::Predictor::Predictor(paddle::AnalysisConfig const&)
1 std::unique_ptr<paddle::PaddlePredictor, std::default_delete
FatalError: Segmentation fault is detected by the operating system.
[TimeInfo: Aborted at 1674105522 (unix time) try "date -d @1674105522" if you are using GNU date ]
[SignalInfo: SIGSEGV (@0x8) received by PID 10018 (TID 0x7f7ee9f170) from PID 8 ]
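The first two TensorRT errors in that log say the optimization profile was given a negative dimension: TensorRT requires explicit min/opt/max shapes for every dynamic input, with all dims >= 0 and min <= opt <= max, and a profile built straight from a model's raw dynamic shape still contains -1 placeholders. Below is a hypothetical pure-Python helper (not TensorRT's actual code) that just illustrates the parameter check being tripped:

```python
def profile_is_valid(min_shape, opt_shape, max_shape):
    """Mimic TensorRT's optimization-profile parameter check: every
    dimension must be non-negative (no -1 dynamic placeholders) and
    satisfy min <= opt <= max."""
    return all(
        0 <= lo <= mid <= hi
        for lo, mid, hi in zip(min_shape, opt_shape, max_shape)
    )

# A profile copied from a model's raw dynamic input shape still has -1,
# which is exactly the "condition: ... x >= 0" failure in the log:
print(profile_is_valid([-1, 3, -1, -1], [-1, 3, -1, -1], [-1, 3, -1, -1]))  # False

# Concrete min/opt/max ranges for the same NCHW input pass the check:
print(profile_is_valid([1, 3, 320, 320], [1, 3, 640, 640], [8, 3, 1280, 1280]))  # True
```

In Paddle Inference the usual fix is to give the TensorRT subgraph explicit shape ranges like these — if I remember the Python API name correctly, via Config.set_trt_dynamic_shape_info(min_input_shape, max_input_shape, optim_input_shape) — rather than letting the dynamic -1 dims reach the profile.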
Issue confirmation — search before asking
Please ask your question
----------- Running Arguments -----------
DET:
  batch_size: 1
  model_dir: run/infer_weights/mot_ppyoloe_s_36e_ppvehicle/
MOT:
  batch_size: 1
  enable: true
  model_dir: run/infer_weights/mot_ppyoloe_s_36e_ppvehicle/
  skip_frame_num: 1
  tracker_config: deploy/pipeline/config/tracker_config.yml
VEHICLE_ATTR:
  batch_size: 1
  color_threshold: 0.5
  enable: true
  model_dir: run/infer_weights/vehicle_attribute_model/
  type_threshold: 0.5
VEHICLE_PLATE:
  det_limit_side_len: 736
  det_limit_type: min
  det_model_dir: run/infer_weights/ch_PP-OCRv3_det_infer/
  enable: true
  rec_batch_num: 6
  rec_image_shape:
Multi-Object Tracking enabled
Vehicle Plate Recognition enabled
Vehicle Attribute Recognition enabled
DET model dir: run/infer_weights/mot_ppyoloe_s_36e_ppvehicle/
MOT model dir: run/infer_weights/mot_ppyoloe_s_36e_ppvehicle/
det_model_dir model dir: run/infer_weights/ch_PP-OCRv3_det_infer/
rec_model_dir model dir: run/infer_weights/ch_PP-OCRv3_rec_infer/
VEHICLE_ATTR model dir: run/infer_weights/vehicle_attribute_model/
E0118 11:42:01.348984 35152 helper.h:114] 2: [convolutionBuilder.cpp::createConvolution::239] Error Code 2: Internal Error (Assertion isOpConsistent(convolution.get()) failed. Cask convolution isConsistent check failed.)
E0118 11:42:01.363926 35152 helper.h:114] 2: [builder.cpp::buildSerializedNetwork::636] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
C++ Traceback (most recent call last):
0 paddle_infer::Predictor::Predictor(paddle::AnalysisConfig const&)
1 std::unique_ptr<paddle::PaddlePredictor, std::default_delete > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
2 paddle::AnalysisPredictor::Init(std::shared_ptr const&, std::shared_ptr const&)
3 paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr const&)
4 paddle::AnalysisPredictor::OptimizeInferenceProgram()
5 paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument)
6 paddle::inference::analysis::IrAnalysisPass::RunImpl(paddle::inference::analysis::Argument)
7 paddle::inference::analysis::IRPassManager::Apply(std::unique_ptr<paddle::framework::ir::Graph, std::default_delete >)
8 paddle::framework::ir::Pass::Apply(paddle::framework::ir::Graph) const
9 paddle::inference::analysis::TensorRtSubgraphPass::ApplyImpl(paddle::framework::ir::Graph) const
10 paddle::inference::analysis::TensorRtSubgraphPass::CreateTensorRTOp(paddle::framework::ir::Node, paddle::framework::ir::Graph, std::vector<std::string, std::allocator > const&, std::vector<std::string, std::allocator >) const
11 paddle::inference::tensorrt::OpConverter::ConvertBlockToTRTEngine(paddle::framework::BlockDesc, paddle::framework::Scope const&, std::vector<std::string, std::allocator > const&, std::unordered_set<std::string, std::hash, std::equal_to, std::allocator > const&, std::vector<std::string, std::allocator > const&, paddle::inference::tensorrt::TensorRTEngine*)
12 paddle::inference::tensorrt::TensorRTEngine::FreezeNetwork()
Error Message Summary:
FatalError: Segmentation fault is detected by the operating system.
[TimeInfo: Aborted at 1674013321 (unix time) try "date -d @1674013321" if you are using GNU date ]
[SignalInfo: SIGSEGV (@0x8) received by PID 35152 (TID 0xffff946a97e0) from PID 8 ]
./run.sh: line 18: 35152 Segmentation fault (core dumped) python deploy/pipeline/pipeline.py --config run/config/infer_cfg_vehicle_attr.yml --output_dir run/output --video_file run/video_file/test0.mp4 --device gpu --run_mode trt_fp16
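Both crashes happen inside TensorRtSubgraphPass / TensorRTEngine::FreezeNetwork, i.e. while building the TensorRT engine, not while running the model. A quick way to confirm that is to bypass TensorRT entirely: PaddleDetection's pipeline accepts a native run mode in addition to the trt_* ones, and if the crash disappears with it, the problem is isolated to the TensorRT build step (unset dynamic shapes, or a TensorRT/cuDNN version mismatch). A sketch of that check, with the original command again commented out because its paths are machine-specific:

```shell
# Hypothesis check: run with the plain Paddle GPU predictor instead of the
# TensorRT subgraph engine ("paddle" instead of "trt_fp16").
RUN_MODE=paddle

# python deploy/pipeline/pipeline.py --config run/config/infer_cfg_vehicle_attr.yml \
#     --output_dir run/output --video_file run/video_file/test0.mp4 \
#     --device gpu --run_mode "$RUN_MODE"
```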