jkjung-avt / tensorrt_demos

TensorRT MODNet, YOLOv4, YOLOv3, SSD, MTCNN, and GoogLeNet
https://jkjung-avt.github.io/
MIT License

python3: yolo_layer.cu:116: virtual nvinfer1::Dims nvinfer1::YoloLayerPlugin::getOutputDimensions(int, const nvinfer1::Dims*, int): Assertion `inputs[0].d[1] == mYoloHeight' failed. Aborted (core dumped) #278

Closed. Darshcg closed this issue 3 years ago.

Darshcg commented 4 years ago

Hi @jkjung-avt,

I am using your tensorrt_demos for my custom models. The yolo_to_onnx conversion works fine, but during the onnx_to_trt conversion I get this error: python3: yolo_layer.cu:116: virtual nvinfer1::Dims nvinfer1::YoloLayerPlugin::getOutputDimensions(int, const nvinfer1::Dims*, int): Assertion `inputs[0].d[1] == mYoloHeight' failed. Aborted (core dumped)

What am I doing wrong? Is this a dimension-related issue?

Thank you

Darshcg commented 4 years ago

The command I used: `python3 onnx_to_tensorrt.py -m yolov3-custom-608 --category_num 5`

jkjung-avt commented 4 years ago

Since you did not provide more details about what modifications you've made in your custom model, I can only make a wild guess...

Please make sure you have marked the correct layers as outputs.

https://github.com/jkjung-avt/tensorrt_demos/blob/e039e0824876d46443aa19e9ad7cf8f7723c713e/yolo/yolo_to_onnx.py#L905-L916

sapkota-saroj commented 4 years ago

Hello, I have the same issue when I tried to experiment with 416x416 instead of 608x608 for INT8 on DLA. I didn't change anything except the height and width in the cfg file, but I got the same error with these commands:
`ln -s yolov3-416.cfg yolov3-dla0-416.cfg`
`ln -s yolov3-416.onnx yolov3-dla0-416.onnx`
`python3 onnx_to_tensorrt.py -v --int8 --dla_core 0 -m yolov3-dla0-416`
Can you please help me find the mistake here?

jkjung-avt commented 4 years ago

Refer to source code here:

https://github.com/jkjung-avt/tensorrt_demos/blob/master/plugins/yolo_layer.cu#L116

This assertion error has nothing to do with DLA or INT8. It indicates that the H dimension of the input tensor (feature map) does not match the expected value: one of (416 // 32), (416 // 16) or (416 // 8).
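To make that check concrete, here is an illustrative sketch mirroring the assertion; the function names are mine, not the plugin's actual identifiers:

```python
# Sketch of the dimension check behind the assertion in yolo_layer.cu.
# YOLOv3 heads run at strides 32, 16 and 8, so the three feature maps
# fed to the plugin should have heights input_h // stride.

def expected_grid_heights(input_h: int, strides=(32, 16, 8)) -> list:
    """Expected H dimension of each YOLO feature map."""
    return [input_h // s for s in strides]


def check_feature_map(input_h: int, fmap_h: int) -> bool:
    """Rough analogue of the `inputs[0].d[1] == mYoloHeight` check."""
    return fmap_h in expected_grid_heights(input_h)


print(expected_grid_heights(416))  # → [13, 26, 52]
print(expected_grid_heights(608))  # → [19, 38, 76]
```

If the ONNX file was built for one input size but the engine name implies another, this check fails and the assertion fires.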

Are you using the downloaded yolov3 (COCO) model? My code should work out of the box for that model.

Darshcg commented 4 years ago

Hi @jkjung-avt,

It works perfectly for the 608 dimension, but for 288 and 416 it throws the error. This is for my custom model. Commands used:
`python3 onnx_to_tensorrt.py -m yolov3-spp-288 --category_num 7`
`python3 onnx_to_tensorrt.py -m yolov3-spp-416 --category_num 7`

And my model is trained at the 608 dimension with 7 classes.
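One way this assertion can fire in such a setup (my assumption, not confirmed in this thread) is converting an ONNX file that was generated for 608x608 under a name that implies 288 or 416: the grid sizes baked into the ONNX graph then disagree with what the plugin expects for the smaller input. A small illustrative sketch of the expected output shapes, using the YOLOv3 default anchors-per-head and strides (names and defaults are mine):

```python
# Illustrative: expected (C, H, W) output shapes of the three YOLOv3
# heads for a given input size and class count. Each grid cell predicts
# anchors_per_head boxes with 4 box coords + 1 objectness + num_classes.

def head_output_shapes(input_size: int, num_classes: int,
                       anchors_per_head: int = 3,
                       strides=(32, 16, 8)) -> list:
    channels = anchors_per_head * (5 + num_classes)
    return [(channels, input_size // s, input_size // s) for s in strides]


print(head_output_shapes(608, 7))  # → [(36, 19, 19), (36, 38, 38), (36, 76, 76)]
print(head_output_shapes(416, 7))  # → [(36, 13, 13), (36, 26, 26), (36, 52, 52)]
```

The channel count stays the same across input sizes, but the grid sizes differ, which is why an ONNX generated at one size cannot simply be reused at another.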

jkjung-avt commented 4 years ago

@Darshcg I still have no clue, with the limited information you have provided so far.

sapkota-saroj commented 4 years ago

Yes, I'm using it out of the box. Without DLA it works fine, but with DLA it throws the error mentioned above. I followed exactly the same steps you provided in the TensorRT section of the README, and it works fine for INT8 as well. But when I use DLA with INT8, it throws the error for 416. Let me remind you that everything works fine for 608 (DLA with INT8). I haven't tested 288, though.

jkjung-avt commented 4 years ago

@sapkota-saroj

> But when I use dla with int8 it throws error for 416. Let me remind you all works fine for 608 (dla with int 8).

That is very strange. I myself don't see such an issue. And I failed to come up with a reason why "yolov4-608" works but "yolov4-416" would fail under the same settings...

Darshcg commented 3 years ago

> Since you did not provide more details about what modifications you've made in your custom model, I can only make a wild guess...
>
> Please make sure you have marked the correct layers as outputs.
>
> https://github.com/jkjung-avt/tensorrt_demos/blob/e039e0824876d46443aa19e9ad7cf8f7723c713e/yolo/yolo_to_onnx.py#L905-L916

This resolved my issue. Thank you @jkjung-avt!