Open ersheng-ai opened 3 years ago
Hi @ersheng-ai,
could you outline why those plugins are not needed anymore? Is there a more elegant solution for the yolo layers in deepstream?
I'm implementing YOLOv4 from scratch via the TensorRT API for deployment with Triton Inference Server in my repo here, and currently I have to use two custom layers in a plugin.
How did you manage to eliminate those?
Also I saw you are using the BatchedNMSPlugin. Any chance you can share your code on how to use the plugin correctly? Is the output of the network dynamic?
I implemented the yolo layer in PyTorch in this repository: https://github.com/Tianxiaomo/pytorch-YOLOv4/blob/master/tool/yolo_layer.py So the yolo layer can be automatically embedded into the ONNX and TensorRT models.
Mish is no problem either, because you can easily implement it with exp, log, and tanh in PyTorch.
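For reference, here is a minimal sketch of that decomposition. It uses plain Python's math module instead of torch so it runs standalone, but the same three primitives (exp, log, tanh) map one-to-one onto torch.exp, torch.log, and torch.tanh, which is why the layer exports cleanly to ONNX without a custom plugin:

```python
import math

def softplus(x: float) -> float:
    # softplus(x) = ln(1 + exp(x)) -- built from exp and log only
    return math.log(1.0 + math.exp(x))

def mish(x: float) -> float:
    # mish(x) = x * tanh(softplus(x)), so exp, log, and tanh suffice
    return x * math.tanh(softplus(x))

print(mish(0.0))  # mish(0) = 0 * tanh(ln 2) = 0.0
```

In a real PyTorch module you would write `x * torch.tanh(torch.log(1 + torch.exp(x)))` (or use the built-in softplus, as noted below); the exact composition is a sketch, not the repo's literal code.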
Hi, thanks for the explanation. Coincidentally, I implemented mish like this yesterday as well; it gave a 60% speedup. By the way, TensorRT supports Softplus out of the box, so there's no need for exp and log.
Can you tell me how well the BatchedNMSPlugin works for you? Is it worth using, or does it complicate things a lot?
I will reorganize the DeepStream code here when I have time. The following files will be removed later because the current YOLOv4 deployment no longer needs them.