Open · thangdt277 opened this issue 2 years ago
To convert SCRFD .pth to ONNX with batchNMSPlugin:
- First, convert the default scrfd.pth to scrfd.onnx (from the original insightface source).
- Add post-processing to the original scrfd.onnx with create_post_process.py; this produces scrfd-post-640-640.onnx.
- From scrfd-post-640-640.onnx, add the custom NMS plugin (which I describe below) to the head with add_full_nms_plugins.py.
- Build the custom plugin to work with NVIDIA TensorRT: NVIDIA's default (dynamic) batchNMSPlugin has only 2 inputs, boxes and scores, which makes it compatible only with object-detection models like the YOLO series. So I modified the default plugin to add 1 more output named nmsed_landmarks; you can check out the documentation here. Just follow these steps to compile batchNMSCustomPlugin and get libmyplugin.so:
cd plugins
mkdir build && cd build
cmake ..
make
- Convert the ONNX model to TensorRT, or run the code. Remember to add the environment variable at the beginning of the command:
LD_PRELOAD=libmyplugin.so python run_pipeline.py
or
LD_PRELOAD=libmyplugin.so ./run_pipeline
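The reason for the extra nmsed_landmarks output can be sketched in plain Python: after greedy NMS picks the surviving boxes, the landmark sets must be gathered with the same kept indices, which the stock two-input plugin has no output for. This is an illustrative sketch only (the helper names iou and nms_with_landmarks are mine, and this is not the plugin's CUDA implementation):

```python
# Illustrative pure-Python sketch of NMS that also gathers landmarks,
# mirroring why the modified plugin needs an extra nmsed_landmarks output.
# Not the plugin's CUDA code; function names here are made up.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def nms_with_landmarks(boxes, scores, landmarks, iou_thr=0.4):
    """Greedy NMS; returns kept boxes, scores, AND their landmarks."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    # A boxes/scores-only plugin would return just the first two of these;
    # landmarks must be gathered with the same kept indices.
    return ([boxes[i] for i in keep],
            [scores[i] for i in keep],
            [landmarks[i] for i in keep])
```

For example, with two heavily overlapping boxes and one separate box, the suppressed box's landmarks are dropped along with its box and score, so the three outputs stay aligned.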
Can I ask one more question? Can you push run_pipeline.py into the project? Thanks for your help!
run_pipeline.py (or run_pipeline) is just an example; you can run the demo from this repo as follows.
Compile the SCRFD sample application:
cd SCRFD
mkdir build && cd build
cmake ..
make
You will get a sample app named face_detectors in the build directory:
LD_PRELOAD=libmyplugin.so ./face_detectors
Hi NNDam,
I finished step 2 and got the file scrfd-post-640-640.onnx.nms.onnx, but I could not convert it to a TensorRT engine. I used trtexec with this command:
export PATH=$PATH:/usr/src/tensorrt/bin
trtexec --fp16 --onnx=scrfd-post-640-640.onnx.nms.onnx --saveEngine=scrfd.engine --minShapes=input.1:1x3x640x640 --optShapes=input.1:16x3x640x640 --maxShapes=input.1:32x3x640x640 --shapes=input.1:16x3x640x640 --workspace=10000
I attached a detailed log file so you can see: log.txt. Please guide me more on this step. Thank you!
You must add LD_PRELOAD=libmyplugin.so before the trtexec command, and you can also load it explicitly as a plugin:
LD_PRELOAD=libmyplugin.so trtexec --plugins=libmyplugin.so --fp16 --onnx= ...
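As an aside, the --minShapes/--optShapes/--maxShapes flags in the trtexec commands above each take a value in name:NxCxHxW form for every dynamic input. A tiny illustrative parser makes the syntax concrete (my own helper, not code from trtexec or TensorRT):

```python
# Illustrative helper: decompose a trtexec shape flag value such as
# "input.1:16x3x640x640" into (input name, shape tuple).
# My own sketch for clarity, not part of trtexec or TensorRT.

def parse_shape_flag(value):
    name, _, dims = value.rpartition(":")
    return name, tuple(int(d) for d in dims.split("x"))
```

So "input.1:16x3x640x640" decomposes into the input named input.1 and the shape (16, 3, 640, 640), i.e. batch 16, 3 channels, 640x640 pixels.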
Thank you NNDam. I added the plugin and converted it.
Can I ask you one more question? How do you convert scrfd.pth to ONNX with NVIDIA's batchNMS? Can you give me a guide or tutorial for doing it?