faedtodd opened this issue 5 years ago
Because the TensorRT parser cannot handle the negative slope directly (the leaky-ReLU version), I added it as a plugin. As written in the TensorRT header, the PReLU plugin layer performs leaky ReLU for 4D tensors: given an input value x, the PReLU layer computes the output as x if x > 0 and negative_slope * x if x <= 0.
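For reference, the element-wise rule the plugin applies is just plain leaky ReLU. Below is a minimal CPU sketch of that computation (the actual plugin runs it on the GPU); the 0.1 slope is the value YOLOv3 uses for its leaky layers, and the helper name leakyRelu is only for illustration.

```cpp
#include <cstdio>
#include <vector>

// Element-wise leaky ReLU / PReLU with one shared negative slope:
//   y = x           if x > 0
//   y = slope * x   if x <= 0
static void leakyRelu(const std::vector<float>& in, std::vector<float>& out, float slope)
{
    out.resize(in.size());
    for (size_t i = 0; i < in.size(); ++i)
        out[i] = in[i] > 0.f ? in[i] : slope * in[i];
}

int main()
{
    std::vector<float> x = {-2.f, -0.5f, 0.f, 1.f, 3.f};
    std::vector<float> y;
    leakyRelu(x, y, 0.1f);                    // 0.1 is YOLOv3's leaky slope
    for (float v : y) std::printf("%g ", v);  // prints: -0.2 -0.05 0 1 3
    std::printf("\n");
    return 0;
}
```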
So the relu layers are going to be replaced by the PReLU plugin layer, which behaves the same as the leaky layer, and the upsample layer will be replaced by your upsample plugin because Caffe doesn't have one?
Yes, in the yolov3 model the relu layers are actually leaky ReLU layers, and the upsample layer is not supported by TensorRT out of the box, so both are added as plugins. That is what we have to do to get the same result in TensorRT.
[tensorRTWrapper/code/include/PluginFactory.h], lines 45-50:
if (isLeakyRelu(layerName))
{
    assert(nbWeights == 0 && weights == nullptr);
    mPluginLeakyRelu.emplace_back(std::unique_ptr<INvPlugin, void (*)(INvPlugin*)>(createPReLUPlugin(NEG_SLOPE), nvPluginDeleter));
    return mPluginLeakyRelu.back().get();
}
...
Can someone explain what this code means?
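For anyone else puzzled by that branch, here is a hedged, commented reading of it. It assumes the legacy TensorRT plugin helpers from NvInferPlugin.h (nvinfer1::plugin::createPReLUPlugin and INvPlugin) plus the repo's own NEG_SLOPE constant and mPluginLeakyRelu member; the wrapper function name makeLeakyReluPlugin is only for illustration, since in the repo this code lives inside PluginFactory::createPlugin.

```cpp
// Commented sketch of the quoted branch; not the full PluginFactory.
#include "NvInferPlugin.h"
#include <cassert>
#include <memory>
#include <vector>

using nvinfer1::plugin::INvPlugin;

// INvPlugin objects must be released via destroy(), not delete, so the
// unique_ptr gets a custom deleter instead of the default one.
static void nvPluginDeleter(INvPlugin* p) { if (p) p->destroy(); }

// Owning container for every leaky-ReLU plugin created while parsing,
// so the plugins stay alive until the engine has been built.
static std::vector<std::unique_ptr<INvPlugin, void (*)(INvPlugin*)>> mPluginLeakyRelu;

static const float NEG_SLOPE = 0.1f;  // YOLOv3's leaky-ReLU slope (assumed value)

// Hypothetical wrapper; in the repo this is one branch of createPlugin().
nvinfer1::IPlugin* makeLeakyReluPlugin(const nvinfer1::Weights* weights, int nbWeights)
{
    // Leaky ReLU carries no trainable weights, so the parser should hand us none.
    assert(nbWeights == 0 && weights == nullptr);

    // createPReLUPlugin builds a PReLU plugin with one shared negative slope,
    // which is exactly leaky ReLU. Wrap it with the destroy()-based deleter,
    // store it, and return the raw pointer to the parser.
    mPluginLeakyRelu.emplace_back(
        std::unique_ptr<INvPlugin, void (*)(INvPlugin*)>(
            nvinfer1::plugin::createPReLUPlugin(NEG_SLOPE), nvPluginDeleter));
    return mPluginLeakyRelu.back().get();
}
```

In short: when the parser meets a layer whose name marks it as leaky ReLU, the factory builds a PReLU plugin with a single shared negative slope, keeps ownership of it so it is destroyed correctly later, and hands the raw pointer back for the network being built.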