Closed yghstill closed 2 years ago
@zhouweic36 For the Paddle Lite Android demo, see: https://github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/picodet#deploy
Hello, why is the .pdparams checkpoint saved during training larger than the official parameter count? Also, is there an updated QR code for the PicoDet technical discussion group? Thanks.
The official docs state the parameter count, which is computed separately and is usually about 1/4 of the model file size. The PicoDet discussion group QR code has been updated.
Hi everyone, I trained PicoDet with PaddlePaddle, exported the model, and deployed it with Paddle Serving, but prediction fails with this error: {'err_no': 8, 'err_msg': "(data_id=0 log_id=0) [ppyolo|0] Failed to postprocess: 'transpose_17.tmp_0.lod'", 'key': [], 'value': [], 'tensors': []} What should fetch_list be set to?
serving_server_conf.prototxt in serving_server:
feed_var { name: "image" alias_name: "image" is_lod_tensor: false feed_type: 1 shape: 1 shape: 3 shape: 416 shape: 416 }
fetch_var { name: "transpose_10.tmp_0" alias_name: "transpose_10.tmp_0" is_lod_tensor: false fetch_type: 1 shape: 1 shape: 2704 shape: 4 }
fetch_var { name: "transpose_11.tmp_0" alias_name: "transpose_11.tmp_0" is_lod_tensor: false fetch_type: 1 shape: 1 shape: 2704 shape: 32 }
fetch_var { name: "transpose_12.tmp_0" alias_name: "transpose_12.tmp_0" is_lod_tensor: false fetch_type: 1 shape: 1 shape: 676 shape: 4 }
fetch_var { name: "transpose_13.tmp_0" alias_name: "transpose_13.tmp_0" is_lod_tensor: false fetch_type: 1 shape: 1 shape: 676 shape: 32 }
fetch_var { name: "transpose_14.tmp_0" alias_name: "transpose_14.tmp_0" is_lod_tensor: false fetch_type: 1 shape: 1 shape: 169 shape: 4 }
fetch_var { name: "transpose_15.tmp_0" alias_name: "transpose_15.tmp_0" is_lod_tensor: false fetch_type: 1 shape: 1 shape: 169 shape: 32 }
fetch_var { name: "transpose_16.tmp_0" alias_name: "transpose_16.tmp_0" is_lod_tensor: false fetch_type: 1 shape: 1 shape: 49 shape: 4 }
fetch_var { name: "transpose_17.tmp_0" alias_name: "transpose_17.tmp_0" is_lod_tensor: false fetch_type: 1 shape: 1 shape: 49 shape: 32 }
My config.yml:
dag:
  is_thread_op: false
  tracer:
    interval_s: 30
http_port: 18888
op:
  ppyolo:
    concurrency: 1
    local_service_conf:
      client_type: local_predictor
      device_type: 1
      devices: '0'
      fetch_list:
      - transpose_17.tmp_0
      model_config: serving_server/
rpc_port: 9998
worker_num: 2
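A plausible fix, assuming the eight output names shown in serving_server_conf.prototxt above: PicoDet's post-processing consumes all four box branches and all four class-score branches, so fetch_list should list every exported output rather than only transpose_17.tmp_0. A sketch (tensor names taken from the prototxt, not verified against your specific export):

```yaml
fetch_list:
- transpose_10.tmp_0
- transpose_11.tmp_0
- transpose_12.tmp_0
- transpose_13.tmp_0
- transpose_14.tmp_0
- transpose_15.tmp_0
- transpose_16.tmp_0
- transpose_17.tmp_0
```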
python tools/export_model.py -c configs/picodet/picodet_L_640_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/pretrained/ESNet_x1_25_pretrained.pdparams --output_dir=inference_model TestReader.inputs_def.image_shape=[3,640,640]
1) I modified the activation functions. 2) Running the command above and debugging along the way, I now hit this error and really don't know how to proceed: [Hint: 'cudaErrorInitializationError'. The API call failed because the CUDA driver and runtime could not be initialized.] (at /paddle/paddle/fluid/platform/gpu_info.cc:108) 3) Installed paddlepaddle-gpu 2.2.2 + paddledet 2.3.0 + paddle2onnx 0.9.1.
Also, I'd like to ask: looking at the cfg file, is `- RandomFlip: {prob: 0.5}` the only data augmentation used?
@sdreamforchen The data augmentations include crop, flip, and RandomDistort.
Are the prediction results from exporting the unmodified model correct? We need to see where you made changes and which activation functions you changed.
Hi, I changed hard sigmoid and hard swish (to relu6-based implementations), because my downstream embedded target has trouble supporting those two functions. After the change I verified with eval.py and everything is correct, but latency is bad: only 17 FPS on a Titan RTX with the L-640 model. 1) After converting to ONNX, a new issue appeared: in the official ONNX model the conv and bn are fused, while in my converted model they are separate; I don't know why (checked with netron). 2) Also, after installing nccl and related setup, I train on a single card with export CUDA_VISIBLE_DEVICES=4, but training still uses GPU:0, while eval is fine. I can gradually study the Paddle code myself, but if convenient please answer. Thanks.
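On point 2, one common cause (an assumption, not a confirmed diagnosis of this case): CUDA device masking is read when the framework first initializes the driver, so the variable must be set in the environment before the framework is imported or the process starts. A minimal sketch:

```python
import os

# CUDA_VISIBLE_DEVICES is read by CUDA-backed frameworks at initialization
# time, so it must be set BEFORE the first framework import (or exported in
# the shell before launching tools/train.py). Setting it after a CUDA
# context exists has no effect, and the process keeps using GPU:0.
os.environ["CUDA_VISIBLE_DEVICES"] = "4"

# import paddle  # hypothetical: the import must come after the line above
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

Equivalently, `export CUDA_VISIBLE_DEVICES=4` in the same shell session that launches the training command, rather than in a different shell.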
The reason I changed the activation functions is the hardsigmoid op in the converted ONNX model: the downstream embedded toolchain errors out on it. Today I confirmed the issue with netron: the converted alpha is 0.166667, not 0.2.
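The 0.166667 is explained by the relu6-based formulation: relu6(x + 3) / 6 equals a hard sigmoid with slope 1/6 ≈ 0.166667, whereas the ONNX HardSigmoid default alpha is 0.2. A small numerical check (plain NumPy, independent of any framework):

```python
import numpy as np

def hard_sigmoid_onnx(x, alpha=0.2, beta=0.5):
    # ONNX HardSigmoid: clip(alpha * x + beta, 0, 1); default alpha is 0.2.
    return np.clip(alpha * x + beta, 0.0, 1.0)

def hard_sigmoid_relu6(x):
    # relu6-based variant: relu6(x + 3) / 6 == clip(x/6 + 0.5, 0, 1).
    # Its slope is 1/6 ~= 0.166667, which is why netron shows alpha=0.166667.
    return np.minimum(np.maximum(x + 3.0, 0.0), 6.0) / 6.0

x = np.linspace(-5.0, 5.0, 101)
# The relu6 form matches the ONNX op with alpha = 1/6, NOT alpha = 0.2.
assert np.allclose(hard_sigmoid_relu6(x), hard_sigmoid_onnx(x, alpha=1.0 / 6.0))
```

So the exported alpha is consistent with the relu6-based implementation; it is a different parameterization rather than a conversion bug.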
@sdreamforchen
Hello yghstill, PaddleDetection is a wonderful platform. Thank you.
May I ask some questions about PaddleDetection PP-PicoDet regarding PyTorch to TensorRT?
I found some information about PyTorch to TensorRT. I think TensorRT improves computation performance when PP-PicoDet does parallel processing. If the PP-PicoDet team has any ideas or plans about PyTorch to TensorRT, could you tell me?
Thank you.
@tb5874 PP-PicoDet is a mobile-side model, and there is no plan to support TensorRT, but the upcoming PP-YOLOE model will support TensorRT; stay tuned.
So good! I'd like to ask: the nccl-related prompt tells us to install NCCL 2. Is that different from `conda install -c anaconda nccl`? Or must NCCL be installed via the method at https://docs.nvidia.com/deeplearning/nccl/install-guide/index.html?
@yghstill Thank you for your kind explanation. So, if I use a low-compute device (a CPU with GPU, like the Nvidia Jetson Xavier NX, though I don't mean only Nvidia platforms; but I know the most easily applicable GPUs today are Nvidia's, so restricting to Nvidia is fine), is the upcoming PP-YOLOE your best recommended model?
I have an AMD Ryzen 5 5600G (6 cores, 3.9 GHz) with an RTX 3060 (I also have another GPU, an RTX 3090, so I will test it) and an Nvidia Jetson Xavier NX (6 cores, 1.4 GHz) with the Volta architecture (384 CUDA cores, 48 Tensor cores). I just want to test this wonderful object detector :)
The target is low computing power, like the environments above (Nvidia Jetson Xavier NX or a normal desktop).
May I ask for your best recommendation of a PaddleDetection object detector for such an environment?
Thank you.
@tb5874 We also recommend that you use the PP-YOLOE model. We tested it on the Jetson Xavier NX and it performs well.
@yghstill Thank you! So now, to understand the upcoming PP-YOLOE: is PP-YOLO the background, or is PP-YOLOE a separate paper? When I read the PP-YOLO paper (https://arxiv.org/pdf/2007.12099.pdf), I found PP-YOLO with the method 'E + Grid Sensitive'. Doesn't that mean PP-YOLOE? X_D..
If that is not PP-YOLOE, should I pre-test PP-YOLOE?
Thank you.
@tb5874 PP-YOLOE is another paper, and it is coming soon.
Similar to YOLO's outputs (13,13,255), (26,26,255), (52,52,255): for the 13x13 feature map, 255 = 3 x 85, 85 = 80 + 5, 5 = x, y, w, h, confidence.
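The channel arithmetic above can be sketched in a few lines (standard COCO YOLO head assumptions: 80 classes, 3 anchors per grid cell):

```python
# YOLO-style head channel count: each anchor predicts the 80 class scores
# plus 5 values (x, y, w, h, objectness confidence), and each grid cell
# carries 3 anchors, giving the 255 channels in the (13, 13, 255) output.
num_classes = 80
per_anchor = num_classes + 5        # 85 = 80 + 5
anchors_per_cell = 3
channels = anchors_per_cell * per_anchor
print(channels)  # 255
```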
When PicoDet is converted with the latest MNN version, the outputs are named transpose*, while earlier versions produced names like save_infer_model/scale_4.tmp_1. The demo's MNN inference now fails. Could you help take a look?
@cv-nlp OK, we will fix this issue.
After training with picodet_640_l, prediction with the dygraph weights gives correct results, but after exporting the model, the top and bottom values in the output are always inf. Why is that?
Why does the PicoDet-S model with 320 input that I exported with PaddleX come out as large as 3.75M, rather than 0.99M? What should I do to get 0.99M? I need to deploy it on Windows 10.
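A likely explanation, assuming the 3.75M figure is the exported file size in MiB: the documented 0.99M is the parameter count, and each float32 parameter occupies 4 bytes on disk, so a ~3.75 MiB file is consistent with roughly 0.99M parameters. Illustrative arithmetic:

```python
# Estimate float32 parameter count from an exported model's file size.
# Numbers are illustrative; the 3.75 MiB figure comes from the question above.
BYTES_PER_FP32 = 4

def params_from_file_size(size_bytes: float) -> float:
    """Approximate float32 parameter count given checkpoint size in bytes."""
    return size_bytes / BYTES_PER_FP32

size_mib = 3.75
params_millions = params_from_file_size(size_mib * 1024 * 1024) / 1e6
print(f"~{params_millions:.2f}M parameters")  # ~0.98M, close to the 0.99M spec
```

So the file size and the documented parameter count may not actually disagree; they are different units for the same model.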
After training with PicoDet, testing runs fine, but exporting the model shows this error. What should I do? Is it a version problem, and if so, which versions should I update?
My versions:
Folks, could anyone send me the discussion group? Thanks!
Did you find one?
PP-PicoDet is a lightweight real-time mobile object detection model. We provide a series of models from small to large, including S, M, and L, surpassing existing SOTA models.
Model highlights:
Links:
Welcome to try it out; feel free to discuss any questions in this thread~
Comparison with other models:
FAQ summary (continuously updated):
Common settings live in the `__base__` configuration; all settings in picodet_x_coco.yml override the `__base__` configuration, so modifying picodet_x_coco.yml is enough. To make communication easier, feel free to scan the QR code to join the WeChat group and continue discussing PP-PicoDet usage and suggestions~
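For reference, a minimal sketch of how that override works (the base-config key is spelled `_BASE_` in PaddleDetection YAML configs; file paths and values below are illustrative, not taken from a real picodet config):

```yaml
# picodet_x_coco.yml (sketch): keys here override the same keys pulled
# in via the _BASE_ list; anything not redefined keeps its base value.
_BASE_: [
  '../datasets/coco_detection.yml',
  '../runtime.yml',
]
epoch: 300          # overrides any epoch set in the _BASE_ files
LearningRate:
  base_lr: 0.32     # overrides the base learning rate
```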