PaddlePaddle / PaddleYOLO

🚀🚀🚀 YOLO series of PaddlePaddle implementation, PP-YOLOE+, RT-DETR, YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOv10, YOLOX, YOLOv5u, YOLOv7u, YOLOv6Lite, RTMDet and so on. 🚀🚀🚀
https://github.com/PaddlePaddle/PaddleYOLO
GNU General Public License v3.0

The download links for the YOLOv8 baseline models are broken? #150

Closed. ShiMinghao0208 closed this issue 6 months ago.

ShiMinghao0208 commented 1 year ago

Issue confirmation: search before asking

Please ask your question

The download links for the YOLOv8 baseline models are broken, so the models cannot be downloaded. A side question: if I want to try other models from Paddle's YOLO series in PP-Human v2, is it enough to simply replace the model files?

nemonameless commented 1 year ago

Which link is broken? If you mean the COCO-trained weights, their head has 80 classes. The PP-Human weights were fine-tuned on a pedestrian dataset, so their head has 1 class. You can try the PaddleYOLO COCO weights, but they will likely detect many classes other than pedestrians.
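To illustrate why the head size matters downstream: an 80-class COCO head emits many class ids, so a pedestrian pipeline would have to keep only the person class (id 0 in the COCO label map). A minimal sketch in plain Python; the detection tuple layout `(class_id, score, x1, y1, x2, y2)` is an assumption modeled on PaddleDetection's usual NMS output, not taken from this thread:

```python
# Each detection: (class_id, score, x1, y1, x2, y2) -- layout assumed,
# modeled on PaddleDetection's common post-NMS output format.
PERSON_CLASS_ID = 0  # "person" in the 80-class COCO label map

def keep_persons(detections, score_thresh=0.5):
    """Filter raw 80-class detections down to confident person boxes."""
    return [
        det for det in detections
        if det[0] == PERSON_CLASS_ID and det[1] >= score_thresh
    ]

dets = [
    (0, 0.91, 10, 20, 50, 120),   # person, high score -> kept
    (2, 0.88, 30, 40, 90, 100),   # car -> dropped
    (0, 0.30, 5, 5, 15, 40),      # person, low score -> dropped
]
print(keep_persons(dets))  # only the first box survives
```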

ShiMinghao0208 commented 1 year ago

> Which link is broken? If you mean the COCO-trained weights, their head has 80 classes. The PP-Human weights were fine-tuned on a pedestrian dataset, so their head has 1 class. You can try the PaddleYOLO COCO weights, but they will likely detect many other classes as well.

The download links in the MODEL ZOO are broken: https://github.com/PaddlePaddle/PaddleYOLO/blob/release/2.6/docs/MODEL_ZOO_cn.md#YOLOv8 After downloading I get: (screenshot)

ShiMinghao0208 commented 1 year ago

> Which link is broken? If you mean the COCO-trained weights, their head has 80 classes. The PP-Human weights were fine-tuned on a pedestrian dataset, so their head has 1 class. You can try the PaddleYOLO COCO weights, but they will likely detect many other classes as well.

I found the YOLOv8 model downloads here: https://github.com/PaddlePaddle/PaddleYOLO/tree/release/2.6/configs/yolov8 However, what I am currently using is the official YOLOv8 model, exported to Paddle format with the official export code. When I load it into PP-Human I get the following error:

```
Traceback (most recent call last):
  File "deploy/pipeline/pipeline.py", line 1108, in <module>
    main()
  File "deploy/pipeline/pipeline.py", line 1095, in main
    pipeline.run_multithreads()
  File "deploy/pipeline/pipeline.py", line 172, in run_multithreads
    self.predictor.run(self.input)
  File "deploy/pipeline/pipeline.py", line 490, in run
    self.predict_video(input, thread_idx=thread_idx)
  File "deploy/pipeline/pipeline.py", line 674, in predict_video
    reuse_det_result=reuse_det_result)
  File "/workdir/Paddle/PaddleDetection-2.5_test/deploy/pptracking/python/mot_sde_infer.py", line 478, in predict_image
    inputs = self.preprocess(batch_image_list)
  File "/workdir/Paddle/PaddleDetection-2.5_test/deploy/pptracking/python/det_infer.py", line 142, in preprocess
    input_tensor.copy_from_cpu(inputs[input_names[i]])
KeyError: 'x0'
```

From the error I traced the problem to line 137 of det_infer.py: `input_names = self.predictor.get_input_names()` returns unexpected values. Could this mean there is something wrong with my converted Paddle model?

One more question: the YOLOv8 download links I found in PaddleYOLO (the ones I posted at the start) do not seem to match the model format PP-Human expects. Do I need to convert them somehow? I ask because the PP-Human inference models also require model.pdiparams and model.pdiparams.info files.
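To obtain the `model.pdmodel` / `model.pdiparams` files that PP-Human's deploy pipeline loads, the usual route is PaddleYOLO's own export script rather than a third-party conversion (which also avoids the `x0` input-name mismatch above, since PaddleDetection-style models expose an `image` input). A hedged sketch; the config path and weights URL below are illustrative assumptions, so check the repo's model zoo for the exact names:

```shell
# Export a PaddleYOLO-trained YOLOv8 model to Paddle inference format.
# Config path and weights URL are illustrative; adjust to your model.
python tools/export_model.py \
    -c configs/yolov8/yolov8_s_500e_coco.yml \
    -o weights=https://paddledet.bj.bcebos.com/models/yolov8_s_500e_coco.pdparams \
    --output_dir=output_inference

# The exported directory should contain model.pdmodel, model.pdiparams,
# model.pdiparams.info and infer_cfg.yml, which PP-Human can load directly.
```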

ShiMinghao0208 commented 1 year ago

> Which link is broken? If you mean the COCO-trained weights, their head has 80 classes. The PP-Human weights were fine-tuned on a pedestrian dataset, so their head has 1 class. You can try the PaddleYOLO COCO weights, but they will likely detect many other classes as well.

I have since read the paddle.inference model-export documentation and now roughly understand the second question. Sorry for the noise.

ShiMinghao0208 commented 1 year ago

> Which link is broken? If you mean the COCO-trained weights, their head has 80 classes. The PP-Human weights were fine-tuned on a pedestrian dataset, so their head has 1 class. You can try the PaddleYOLO COCO weights, but they will likely detect many other classes as well.

Hello, I have now got the YOLOv8 model running successfully inside PP-Human v2, but with run_mode=trt_fp16 I get the following error:

```
E0605 03:15:22.077699 48 helper.h:111] elementwise (Output: tmp_711311): elementwise inputs must have same dimensions or follow broadcast rules (input dimensions were [8400,2] and [2,1]).
(the same elementwise message is repeated several more times)
E0605 03:15:22.077869 48 helper.h:111] Parameter check failed at: ../builder/Layers.h::setAxis::381, condition: axis >= 0 && axis < Dims::MAX_DIMS
E0605 03:15:22.079092 48 helper.h:111] Could not compute dimensions for tmp_711311, because the network is not valid.
E0605 03:15:22.079151 48 helper.h:111] Network validation failed.
Traceback (most recent call last):
  File "deploy/pipeline/pipeline.py", line 1108, in <module>
    main()
  File "deploy/pipeline/pipeline.py", line 1093, in main
    pipeline = Pipeline(FLAGS, cfg)
  File "deploy/pipeline/pipeline.py", line 89, in __init__
    self.predictor = PipePredictor(args, cfg, self.is_video)
  File "deploy/pipeline/pipeline.py", line 467, in __init__
    region_polygon=self.region_polygon)
  File "/workdir/Paddle/PaddleDetection-2.5_test/deploy/pptracking/python/mot_sde_infer.py", line 119, in __init__
    threshold=threshold, )
  File "/workdir/Paddle/PaddleDetection-2.5_test/deploy/pptracking/python/det_infer.py", line 113, in __init__
    enable_mkldnn=enable_mkldnn)
  File "/workdir/Paddle/PaddleDetection-2.5_test/deploy/pptracking/python/det_infer.py", line 486, in load_predictor
    predictor = create_predictor(config)
SystemError:

--------------------------------------
C++ Traceback (most recent call last):
--------------------------------------
0   paddle_infer::Predictor::Predictor(paddle::AnalysisConfig const&)
1   std::unique_ptr<paddle::PaddlePredictor, std::default_delete> paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
2   paddle::AnalysisPredictor::Init(std::shared_ptr const&, std::shared_ptr const&)
3   paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr const&)
4   paddle::AnalysisPredictor::OptimizeInferenceProgram()
5   paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument)
6   paddle::inference::analysis::IrAnalysisPass::RunImpl(paddle::inference::analysis::Argument)
7   paddle::inference::analysis::IRPassManager::Apply(std::unique_ptr<paddle::framework::ir::Graph, std::default_delete>)
8   paddle::framework::ir::Pass::Apply(paddle::framework::ir::Graph) const
9   paddle::inference::analysis::TensorRtSubgraphPass::ApplyImpl(paddle::framework::ir::Graph) const
10  paddle::inference::analysis::TensorRtSubgraphPass::CreateTensorRTOp(paddle::framework::ir::Node, paddle::framework::ir::Graph, std::vector<std::string, std::allocator> const&, std::vector<std::string, std::allocator>) const
11  paddle::inference::tensorrt::OpConverter::ConvertBlockToTRTEngine(paddle::framework::BlockDesc, paddle::framework::Scope const&, std::vector<std::string, std::allocator> const&, std::unordered_set<std::string, std::hash, std::equal_to, std::allocator> const&, std::vector<std::string, std::allocator> const&, paddle::inference::tensorrt::TensorRTEngine)
12  paddle::inference::tensorrt::TensorRTEngine::FreezeNetwork()
13  phi::enforce::EnforceNotMet::EnforceNotMet(phi::ErrorSummary const&, char const, int)
14  phi::enforce::GetCurrentTraceBackString[abi:cxx11]

----------------------
Error Message Summary:
----------------------
FatalError: Build TensorRT cuda engine failed! Please recheck you configurations related to paddle-TensorRT.
  [Hint: infer_engine_ should not be null.] (at /home/paddle/data/xly/workspace/23278/Paddle/paddle/fluid/inference/tensorrt/engine.cc:296)
```

Inference with plain Paddle works fine; with Paddle + TRT I get this size error. Is this caused by an unsupported operator, or is there a way to fix it?

ShiMinghao0208 commented 1 year ago

> Which link is broken? If you mean the COCO-trained weights, their head has 80 classes. The PP-Human weights were fine-tuned on a pedestrian dataset, so their head has 1 class. You can try the PaddleYOLO COCO weights, but they will likely detect many other classes as well.

One more question: with plain Paddle inference, model initialization is fast. With paddle+trt_fp16, initialization takes a very long time, although subsequent per-image inference is indeed faster. What is the cause of the long initialization?

nemonameless commented 6 months ago

The first time a model runs, TensorRT has to optimize it: layer fusion, precision calibration, kernel selection and so on. That process takes time. The size error occurs because YOLOv8 weights are exported without NMS by default; you need to export with NMS included to use them here, as the docs describe. The original PP-YOLOE weights in PP-Human were exported with NMS, which is why they work directly.
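A hedged sketch of the two points above as commands (flag names and paths follow common PaddleDetection/PaddleYOLO conventions and are assumptions here; verify them against the export docs for your release):

```shell
# 1) Export YOLOv8 with NMS included so the pipeline consumes final boxes
#    directly: do NOT pass exclude_nms/exclude_post_process. Setting
#    trt=True prepares the exported graph for Paddle-TensorRT.
python tools/export_model.py \
    -c configs/yolov8/yolov8_s_500e_coco.yml \
    -o weights=output/yolov8_s_500e_coco/model_final.pdparams trt=True \
    --output_dir=output_inference

# 2) The long trt_fp16 startup is the one-time engine build (layer fusion,
#    kernel selection); later runs are fast once the optimized engine is
#    reused. Run the pipeline as usual:
python deploy/pipeline/pipeline.py \
    --config deploy/pipeline/config/infer_cfg_pphuman.yml \
    --video_file=test.mp4 --device=gpu --run_mode=trt_fp16
```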