dlunion / tensorRTIntegrate

TensorRT ONNX Plugin, Inference, Compile

How to design bottleneck layers such as a denseblock layer in TensorRT? #13

Open feitiandemiaomi opened 4 years ago

feitiandemiaomi commented 4 years ago

I want to speed up a DenseNet model with TensorRT, and I ran into a design question while writing a custom plugin for the denseblock layer. I am using a Caffe model as input. In the Caffe prototxt, the denseblock is a single layer, but internally it contains 8 convolution layers plus BN operations. How should the plugin be defined: should I write 8 separate plugins for the 8 convolution layers and the BN layers, or a single plugin that contains all 8 convolutions and the BN? Here is the relevant part of the prototxt. Can you give some advice? Thanks for your reply.

DenseBlock 1

layer { name: "DenseBlock1" type: "DenseBlock" bottom: "conv1" top: "DenseBlock1" denseblock_param { numTransition: 8 initChannel: 64 growthRate: 8 Filter_Filler { type: "msra" } BN_Scaler_Filler { type: "constant" value: 1 } BN_Bias_Filler { type: "constant" value: 0 } use_dropout: false dropout_amount: 0.2 } }

dlunion commented 4 years ago

My suggestion is not to implement it as a single merged block. Expand it directly into convolution, BN, and so on, writing one layer for each. You don't need to write a plugin at all, because convolution is a standard operation. TensorRT already has dedicated optimizations for memory usage and speed, so a convolution you implement yourself in a plugin will very likely perform much worse. That's my opinion.
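To make this concrete, here is a minimal sketch of what the first transition of the block could look like when unrolled into standard Caffe layers. It assumes the common DenseNet transition order BN → Scale → ReLU → 3×3 Conv → Concat; the layer names are illustrative, and the fused DenseBlock weights would have to be mapped onto the individual BatchNorm/Scale/Convolution layers when exporting the caffemodel.

```
# Transition 1 of DenseBlock1, unrolled into standard layers (names are illustrative).
layer { name: "db1_t1_bn"    type: "BatchNorm"   bottom: "conv1"     top: "db1_t1_bn" }
layer { name: "db1_t1_scale" type: "Scale"       bottom: "db1_t1_bn" top: "db1_t1_bn"
        scale_param { bias_term: true } }
layer { name: "db1_t1_relu"  type: "ReLU"        bottom: "db1_t1_bn" top: "db1_t1_bn" }
layer { name: "db1_t1_conv"  type: "Convolution" bottom: "db1_t1_bn" top: "db1_t1_conv"
        convolution_param { num_output: 8 kernel_size: 3 pad: 1 stride: 1 bias_term: false } }
# Concatenate the new feature maps onto the block input.
layer { name: "db1_t1_concat" type: "Concat"
        bottom: "conv1" bottom: "db1_t1_conv" top: "db1_t1_concat" }
# Transitions 2..8 repeat the same pattern, each taking the previous concat
# output as input, so the channel count grows 64 -> 72 -> ... -> 128.
```

Every layer in this form maps to a built-in TensorRT layer (BatchNorm + Scale are handled as scale operations), so the network runs entirely on TensorRT's optimized kernels and its layer-fusion passes can still apply.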