Tianxiaomo / pytorch-YOLOv4

PyTorch, ONNX and TensorRT implementation of YOLOv4
Apache License 2.0

YOLOv4 deployment to DeepStream does not need yolo C++ plugin #226

Open ersheng-ai opened 3 years ago

ersheng-ai commented 3 years ago

I will reorganize the DeepStream code here when I have time. The following files will be removed later because the current YOLOv4 deployment does not need them:

yolo.cpp
yolo.h
yoloPlugins.cpp
yoloPlugins.h
philipp-schmidt commented 3 years ago

Hi @ersheng-ai,

Could you outline why those plugins are not needed anymore? Is there a more elegant solution for the yolo layers in DeepStream?

I'm implementing YOLOv4 from scratch via the TensorRT API for deployment via Triton Inference Server in my repo here, and currently I have to use two custom layers in a plugin.

How did you manage to eliminate those?

Also, I saw you are using the BatchedNMSPlugin. Any chance you can share your code on how to use the plugin correctly? Is the output of the network dynamic?

ersheng-ai commented 3 years ago

I implemented the yolo layer via PyTorch in this repository: https://github.com/Tianxiaomo/pytorch-YOLOv4/blob/master/tool/yolo_layer.py. So the yolo layer can be automatically embedded into ONNX and TensorRT.

Mish is no problem either, because you can implement it with just exp, log and tanh in PyTorch.
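The formula behind that remark is Mish(x) = x * tanh(softplus(x)) with softplus(x) = ln(1 + e^x). A minimal plain-Python sketch of the same composition (not the repository's actual PyTorch code, which lives in `yolo_layer.py`):

```python
import math

def mish(x: float) -> float:
    """Mish activation: x * tanh(ln(1 + exp(x))).

    Built only from exp, log and tanh, which is why the exported
    ONNX/TensorRT graph needs no custom activation plugin.
    """
    return x * math.tanh(math.log(1.0 + math.exp(x)))
```

In PyTorch the same one-liner would use `torch.tanh`, `torch.log` and `torch.exp` elementwise, so ONNX export decomposes it into standard ops automatically.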

philipp-schmidt commented 3 years ago

Hi, thanks for the explanation. Coincidentally I implemented mish like this yesterday as well. It gave a 60% speedup. TensorRT supports Softplus out of the box, by the way, so there is no need for exp and log.

Can you tell me how well the BatchedNMSPlugin works for you? Is it worth using or is it complicating things a lot?
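For reference, the operation BatchedNMSPlugin performs per batch item and per class is ordinary greedy non-maximum suppression. A minimal plain-Python sketch of that algorithm (illustrative only, not the plugin's CUDA implementation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop overlapping ones."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep
```

The plugin's advantage is doing this on-GPU for a whole batch with fixed-size outputs, which avoids copying thousands of candidate boxes back to the host.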

jstumpin commented 3 years ago

Worth using according to this.