lewes6369 / TensorRT-Yolov3

TensorRT for Yolov3
MIT License

No field named "upsample_param" #2

Closed Ricardozzf closed 5 years ago

Ricardozzf commented 5 years ago

Hi, when running this repo I hit the following error:

--input=./2.jpg
--caffemodel=yolov3.caffemodel
--prototxt=yolov3.prototxt
####### input args#######
C=3;H=416;W=416;caffemodel=yolov3.caffemodel;calib=;class=20;input=./2.jpg;mode=fp32;outputNodes=layer82-conv,layer94-conv,layer106-conv;prototxt=yolov3.prototxt;
init plugin proto: yolov3.prototxt caffemodel: yolov3.caffemodel
Begin parsing model...
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 2267:18: Message type "ditcaffe.LayerParameter" has no field named "upsample_param".
ERROR: CaffeParser: Could not parse deploy file
ERROR: ssd_error_log: Fail to parse
Segmentation fault (core dumped)

It looks like my prototxt file doesn't match the repo's file; my upsample layer definition is there.

lewes6369 commented 5 years ago

Hi @Ricardozzf. The TensorRT caffe parser can't handle params that are not in its default proto file, even though I added the layer as a plugin. You need to comment out the "upsample_param" block but still leave the type "Upsample"; then it runs fine.

Thanks.
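
For illustration, the edit might look roughly like this in the prototxt (layer names and the scale value are placeholders from a typical converted yolov3.prototxt, not taken from this repo):

```
# Before: the stock ditcaffe proto has no "upsample_param", so parsing fails.
layer {
    bottom: "layer85-conv"
    top: "layer86-upsample"
    name: "layer86-upsample"
    type: "Upsample"
    upsample_param {
        scale: 2
    }
}

# After: keep type "Upsample" so the plugin factory still creates the layer,
# but comment out the parameter block the parser does not recognize.
layer {
    bottom: "layer85-conv"
    top: "layer86-upsample"
    name: "layer86-upsample"
    type: "Upsample"
    #upsample_param {
    #    scale: 2
    #}
}
```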

Ricardozzf commented 5 years ago

Following your suggestion, I have converted the caffemodel to FP32 and FP16 successfully. But I fail to convert the model to INT8 precision; when I run the conversion I get this error:

NvPluginYOLO.cu:58: virtual void nvinfer1::plugin::PReLU::configure(const nvinfer1::Dims*, int, const nvinfer1::Dims*, int, int): Assertion `mBatchDim == 1' failed.

The .calib file seems to be incorrect too; it contains the key words "Unnamed ITensor*", so I guess something is wrong in the PReLU module.

My TensorRT version is 4.0.1.6. Do you have a solution?

Ricardozzf commented 5 years ago

OK, after adding a PReLU plugin layer myself, the error still exists. FP32 and FP16 are all right; INT8 is wrong.

Ricardozzf commented 5 years ago

I checked it. When the `mBatchDim == 1' failed assertion occurs, re-running the built executable gives the final result. Overwriting a new PReLU class is an available way to avoid the error, but it is about 1 ms slower than the default PReLU. Finally, I found a bug in main.cpp, line 166: auto detects = get_detections(outputData.get(),width,height,h,w,&nboxes,classes); should be auto detects = get_detections(outputData.get(),width,height,w,h,&nboxes,classes); (see the excerpt below).
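
For clarity, the fix described above as a before/after excerpt of that call (argument names exactly as quoted in the comment, not verified against the current repo):

```cpp
// main.cpp, around line 166: the last two spatial arguments were swapped.
// Before (wrong order: h, w):
//   auto detects = get_detections(outputData.get(), width, height, h, w, &nboxes, classes);
// After (correct order: w, h):
auto detects = get_detections(outputData.get(), width, height, w, h, &nboxes, classes);
```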

Yuuuuuuuuuuuuuuuuuummy commented 5 years ago

Hi @Ricardozzf, did you get correct results in FP32 and FP16 mode? Many thanks.

lewes6369 commented 5 years ago

Hi @Ricardozzf, yes, the PReLU error is probably something wrong inside TensorRT's createPReLUPlugin implementation. I tested INT8 calibration for other caffemodels and it is OK. Although running Yolov3 fails the first time, the calibration .calib file is still written correctly. As you said, re-running the program will use the cached calibration file, and then INT8 mode runs fine.
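
For readers wondering why re-running works: TensorRT INT8 calibrators can persist the calibration table to a file and reuse it on the next run through their readCalibrationCache/writeCalibrationCache hooks. A minimal, repo-independent sketch of that caching behaviour (the helper names and the cache file path are made up for illustration, not this repo's actual code):

```cpp
// Sketch of calibration-cache handling (standalone illustration only).
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Load a previously written calibration table; an empty result means
// calibration still has to run from scratch.
std::vector<char> loadCalibCache(const std::string& path)
{
    std::ifstream in(path, std::ios::binary);
    if (!in)
        return {};
    return std::vector<char>(std::istreambuf_iterator<char>(in),
                             std::istreambuf_iterator<char>());
}

// Persist the table produced by the first run, so the next run can skip
// calibration and build the INT8 engine directly from the cached file.
void saveCalibCache(const std::string& path, const std::vector<char>& cache)
{
    std::ofstream out(path, std::ios::binary);
    out.write(cache.data(), static_cast<std::streamsize>(cache.size()));
}
```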

And thank you for pointing out the bug in main.cpp. I have updated it.

Ricardozzf commented 5 years ago

@Yuuuuuuuuuuuuuuuuuummy I don't have a suitable platform to test FP16, but I get correct results in FP32 and INT8.

CODE-for-VISION commented 5 years ago

Hi, I am trying to use TensorRT on a Caffe-based Faster R-CNN, but I am facing the error below, similar to the one above, on the NVIDIA Jetson TX2 platform.

[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 290:21: Message type "ditcaffe.LayerParameter" has no field named "roi_pooling_param".
CaffeParser: Could not parse deploy file
ConvertCaffeToTrtModel: ConvertCaffeToTrtModel_main.cpp:95: void caffeToTRTMODEL(const string&, const string&, const std::vector<std::__cxx11::basic_string >&, unsigned int, unsigned int, nvinfer1::IHostMemory*&, int): Assertion `blobNameToTensor != nullptr' failed.
Aborted (core dumped)

doraemon96 commented 5 years ago

> Hi @Ricardozzf. The TensorRT caffe parser can't handle params that are not in its default proto file, even though I added the layer as a plugin. You need to comment out the "upsample_param" block but still leave the type "Upsample"; then it runs fine.
>
> Thanks.

I'm not sure how to approach this. Could you explain in more detail? Sorry for the inconvenience.