talebolano / TensorRT-Yolov3

TensorRT for Yolov3
MIT License
49 stars · 16 forks

CaffeParser: Could not parse deploy file #4

Closed github2016-yuan closed 4 years ago

github2016-yuan commented 4 years ago

I followed your steps and built TensorRT_my.exe in VS2015, but when I run the command in cmd I get an error:

```
TensorRT_my.exe
####### input args#######
C=3; H=608; W=608; caffemodel=yolov3_416.caffemodel; calib=; cam=0; class=80; classname=coco.name; display=1; evallist=; input=test.jpg; inputstream=cam; mode=fp32; nms=0.450000; outputs=yolo-det; prototxt=yolov3_416.prototxt; savefile=result; saveimg=0; videofile=sample.mp4;
####### end args#######
init plugin
proto: yolov3_416.prototxt caffemodel: yolov3_416.caffemodel
Begin parsing model...
[libprotobuf ERROR E:\Perforce\rboissel_devdt_windows\sw\gpgpu\MachineLearning\DIT\dev\nvmake\externals\protobuf\3.0.0\src\google\protobuf\text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 2622:20: Message type "ditcaffe.LayerParameter" has no field named "upsample_param".
ERROR: CaffeParser: Could not parse deploy file
ERROR: ssd_error_log: Fail to parse
```

Environment: Windows + VS2015 + CUDA 10.0 + cuDNN 7.6.5. Does this mean I need to install protobuf on my system?

talebolano commented 4 years ago

Please comment out all the parameters of the upsample layers in the prototxt file, for example:

```
layer {
    bottom: "layer19-conv"
    top: "layer20-upsample"
    name: "layer20-upsample"
    type: "Upsample"
    #upsample_param {
    #    scale: 2
    #}
}
```
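If the prototxt contains many upsample layers, the edit above can be automated. This is a minimal sketch; it assumes the `upsample_param` blocks are simple and non-nested, as in the example above, and the file names in the usage comment are placeholders:

```python
import re

def comment_out_upsample_params(prototxt_text):
    """Prefix every line of each upsample_param { ... } block with '#'."""
    def repl(match):
        return "\n".join("#" + line for line in match.group(0).splitlines())
    # Matches a simple, non-nested upsample_param { ... } block.
    pattern = re.compile(r"upsample_param\s*\{[^}]*\}")
    return pattern.sub(repl, prototxt_text)

# Example (file names are placeholders):
# text = open("yolov3_608.prototxt").read()
# open("yolov3_608_trt.prototxt", "w").write(comment_out_upsample_params(text))
```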


github2016-yuan commented 4 years ago

It works after I commented out the params. I use a command like this:

```
TensorRT_my.exe --caffemodel=yolov3_608.caffemodel --prototxt=yolov3_608_trt.prototxt --display=1 --inputstream=video --videofile=sample.mp4 --classname=coco.names
```

It just runs fast in the black cmd window, and the result display is a red screen with some white words from coco.names. Is that normal? I also tested an image like this:

```
TensorRT_my.exe --caffemodel=yolov3_608.caffemodel --prototxt=yolov3_608_trt.prototxt --display=1 --inputstream=image --image=test.jpg --classname=coco.names
```

It also runs very fast. This confuses me; I think I should get the image with some red boxes, just like https://github.com/talebolano/TensorRT-Yolov3#example. I believe your great work is for testing the performance of TensorRT, which is what I am very interested in. Hope to hear from you again. Thanks in advance.

talebolano commented 4 years ago

This is not normal. Please check configs.h and YoloConfigs.h and make sure that h and w are both 608, delete the generated *.engine file, and retest with the following command:

```
TensorRT_my.exe --caffemodel=yolov3_608.caffemodel --prototxt=yolov3_608_trt.prototxt --display=1 --inputstream=video --videofile=sample.mp4 --classname=coco.names --H=608 --W=608
```


github2016-yuan commented 4 years ago

@talebolano NICE! It finally works in fp16 and fp32 mode. When I run the command in int8 mode, I get an error because cal.list is missing. How can I obtain it or create it myself?

talebolano commented 4 years ago

You should prepare more than 100 pictures in a folder and write the path of each picture in cal.list.
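Building cal.list by hand is tedious, so it can be generated with a short script. This is a sketch under the assumption that the calibration images live in one folder (the folder name, output file name, and extension list are placeholders to adjust):

```python
# Sketch: build a cal.list calibration file from a folder of images.
# The folder layout and extensions are assumptions; adjust to your data.
import os

def write_cal_list(image_dir, out_path="cal.list", exts=(".jpg", ".jpeg", ".png")):
    names = sorted(n for n in os.listdir(image_dir) if n.lower().endswith(exts))
    with open(out_path, "w") as f:
        for name in names:
            # One image path per line, relative to the working directory.
            f.write(f"{image_dir}/{name}\n")
    return len(names)

# Example: write_cal_list("images") returns the number of images listed.
```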


github2016-yuan commented 4 years ago

Sorry for the delay. I put 116 images in a folder called images and wrote the path of each image in cal.list:

```
images/00f84af0bf5047da.jpg
images/00faadd9350f458e.jpg
images/00face220e455567.jpg
images/00fbf5d41aae7e48.jpg
images/00fe124a798aafc1.jpg
```

All the images are airplanes from the Open Images V4 dataset. Then I used the command:

```
TensorRT_my.exe --caffemodel=yolov3_608.caffemodel --prototxt=yolov3_608_trt.prototxt --display=1 --inputstream=video --videofile=sample.mp4 --classname=coco.names --mode=int8 --calib=cal.list
```

The output is `find calibration file, loading ...` and then it works. But the inference time is about 38 ms, almost the same as in fp16 mode. It seems that int8 mode does not reduce inference time, and it confuses me a lot that all three modes take almost the same time, about 38 ms. Is something wrong with the images? Are there special requirements for the 100+ calibration images?

My environment: Win10, GTX 1070, CPU i7-9700K.

talebolano commented 4 years ago

That's normal. Only some NVIDIA GPUs support fast int8 mode, such as the Tesla P40. But fp16 mode should be faster than fp32; check that the correct engine file is being loaded.
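In C++, TensorRT exposes `IBuilder::platformHasFastInt8()` to query this at runtime. As a rough rule of thumb only (an assumption, not TensorRT's full capability matrix), the DP4A int8 dot-product instruction requires CUDA compute capability 6.1 or higher, which can be sketched like this:

```python
# Rough rule of thumb (an assumption, not TensorRT's actual capability query):
# the DP4A int8 instruction requires compute capability >= 6.1.
# Note that cc 6.0 (Tesla P100) lacks it even though it is Pascal.
def has_fast_int8(compute_capability):
    major, minor = compute_capability
    return (major, minor) >= (6, 1)

# A few compute capabilities from NVIDIA's CUDA documentation:
gpus = {
    "GTX 1070": (6, 1),    # Pascal GP104
    "Tesla P40": (6, 1),   # Pascal GP102
    "Tesla P100": (6, 0),  # Pascal GP100, no DP4A
    "GTX 980": (5, 2),     # Maxwell
}
for name, cc in gpus.items():
    print(name, has_fast_int8(cc))
```

By this rule of thumb the GTX 1070 (cc 6.1) does have DP4A, so identical fp16/int8 timings may also indicate that a cached engine built for another mode is being reused; deleting the generated *.engine file before switching modes, as suggested earlier in this thread, is worth trying.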


github2016-yuan commented 4 years ago

OK, I really appreciate the great work you've done and all the replies you've given me. I will keep going with TensorRT.