Hi @Ricardozzf. The TensorRT caffe parser can't handle params that are not in its default proto file, even though I added the layer as a plugin. You need to comment out the "upsample_param" block but still leave the type "Upsample". Then it should run OK.
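For example, the edit to the deploy prototxt looks roughly like this (a sketch only; the layer/blob names and the scale value are placeholders, not taken from this repo):

```
layer {
  name: "upsample1"
  type: "Upsample"       # keep the custom type so the plugin is still created
  bottom: "conv_in"
  top: "upsample_out"
  # upsample_param {     # comment out the block the parser does not know about
  #   scale: 2
  # }
}
```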
Thanks.
Following your suggestion, I have converted the caffe model to FP32 and FP16 successfully. However, I fail to convert the model to INT8 precision; when I run the conversion I get this error:
```
NvPluginYOLO.cu:58: virtual void nvinfer1::plugin::PReLU::configure(const nvinfer1::Dims*, int, const nvinfer1::Dims*, int, int): Assertion `mBatchDim == 1' failed.
```
The `.calib` file seems to be incorrect too; it contains the key words `Unnamed ITensor*`, so I guess something is wrong in the PReLU module.
My TensorRT version is 4.0.1.6; do you have a solution?
OK, after adding the PReLU plugin layer myself, the error still exists. FP32 and FP16 are all right; INT8 is wrong.
I checked it. When the `mBatchDim == 1' failed` error occurs, re-running the executable gives the final result. Writing a new PReLU class to replace the default one is an available way to avoid the error, but it is about 1 ms slower than the default PReLU.
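For reference, a custom PReLU class only has to reproduce the standard channel-wise PReLU formula. A minimal CPU reference sketch (plain C++; the function name and the NCHW, batch-1 layout are my own assumptions, not code from this repo):

```cpp
#include <cstddef>
#include <vector>

// Reference PReLU: out = x if x > 0, otherwise slope[c] * x.
// Assumes a single image in NCHW layout; 'slope' holds one value per channel.
std::vector<float> preluReference(const std::vector<float>& in,
                                  const std::vector<float>& slope,
                                  std::size_t channels,
                                  std::size_t height,
                                  std::size_t width)
{
    std::vector<float> out(in.size());
    const std::size_t plane = height * width;
    for (std::size_t c = 0; c < channels; ++c)
    {
        for (std::size_t i = 0; i < plane; ++i)
        {
            const float x = in[c * plane + i];
            out[c * plane + i] = (x > 0.0f) ? x : slope[c] * x;
        }
    }
    return out;
}
```

A plugin would run the same formula in a CUDA kernel inside its enqueue(); the extra ~1 ms presumably comes from that kernel being less optimised than TensorRT's built-in PReLU.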
Finally, I found a bug in main.cpp, line 166:

```cpp
auto detects = get_detections(outputData.get(),width,height,h,w,&nboxes,classes);
```

should be

```cpp
auto detects = get_detections(outputData.get(),width,height,w,h,&nboxes,classes);
```
Hi @Ricardozzf, can you get correct results in FP32 and FP16 mode? Many thanks.
Hi @Ricardozzf, yes, the PReLU error is probably a bug inside TensorRT's createPReLUPlugin implementation. I tested INT8 calibration on other caffemodels and it was OK. Although running YOLOv3 fails the first time, the calibration still writes the calib file correctly. As you said, re-running the program will use the cached calibration file, and then INT8 mode runs fine.
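For context, this is the general mechanism: while building an INT8 engine, TensorRT first calls readCalibrationCache(), and if that returns a valid buffer it reuses the cached table and skips the calibration pass that crashes. A minimal sketch of a calibrator that persists its cache (the class name, file handling, and the stubbed getBatch() are my own, not code from this repo):

```cpp
#include <NvInfer.h>
#include <cstddef>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// INT8 calibrator that stores its calibration table on disk, so a second run
// can reuse it instead of repeating the calibration pass.
class CachingCalibrator : public nvinfer1::IInt8EntropyCalibrator
{
public:
    explicit CachingCalibrator(const std::string& cacheFile) : mCacheFile(cacheFile) {}

    int getBatchSize() const override { return 1; }

    // A real calibrator would copy the next calibration batch into the device
    // bindings and return true; returning false ends calibration immediately.
    bool getBatch(void* bindings[], const char* names[], int nbBindings) override
    {
        return false;
    }

    // Return the cached table if it exists; TensorRT then skips calibration.
    const void* readCalibrationCache(std::size_t& length) override
    {
        std::ifstream in(mCacheFile, std::ios::binary);
        mCache.assign(std::istreambuf_iterator<char>(in), std::istreambuf_iterator<char>());
        length = mCache.size();
        return mCache.empty() ? nullptr : mCache.data();
    }

    // Called after a successful calibration pass; write the table to disk.
    void writeCalibrationCache(const void* cache, std::size_t length) override
    {
        std::ofstream out(mCacheFile, std::ios::binary);
        out.write(static_cast<const char*>(cache), length);
    }

private:
    std::string mCacheFile;
    std::vector<char> mCache;
};
```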
And thank you for pointing out the bug in main.cpp. I have updated it.
@Yuuuuuuuuuuuuuuuuuummy I don't have a suitable platform to test FP16, but I get correct results in FP32 and INT8.
Hi, I am trying to use TensorRT on a Caffe-based Faster RCNN, but I am facing the error below, similar to the one above, on the NVIDIA Jetson TX2 platform.
```
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 290:21: Message type "ditcaffe.LayerParameter" has no field named "roi_pooling_param".
CaffeParser: Could not parse deploy file
ConvertCaffeToTrtModel: ConvertCaffeToTrtModel_main.cpp:95: void caffeToTRTMODEL(const string&, const string&, const std::vector<std::__cxx11::basic_string
```
Hi @Ricardozzf. The TensorRT caffe parser can't handle params that are not in its default proto file, even though I added the layer as a plugin. You need to comment out the "upsample_param" block but still leave the type "Upsample". Then it should run OK.
Thanks.
I'm not sure how to approach this. Can you explain in more detail? Sorry for the inconvenience.
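In case it helps, the suggestion above translates to the same kind of edit in the Faster RCNN deploy prototxt: keep the layer type so your plugin factory is invoked, but comment out the param block the parser rejects. A sketch only, with typical Faster RCNN layer names and values as placeholders:

```
layer {
  name: "roi_pool5"
  type: "ROIPooling"        # keep the type so the layer is handled by your plugin
  bottom: "conv5_3"
  bottom: "rois"
  top: "pool5"
  # roi_pooling_param {     # comment out the block the parser cannot recognise
  #   pooled_w: 7
  #   pooled_h: 7
  #   spatial_scale: 0.0625
  # }
}
```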
Hi, when running this repo, I met some errors.
It looks like my prototxt file doesn't match the repo's file; my upsample layer definition is there.