NNDam / TensorRT-CPP

Wrappers for some popular deep learning models in C++

How can I convert the SCRFD model? #2

Open thangdt277 opened 2 years ago

thangdt277 commented 2 years ago

Can I ask you a question? How do you convert scrfd.pth to ONNX with NVIDIA's batchNMS plugin? Can you give me a guide or tutorial for doing it?

NNDam commented 2 years ago
  1. To convert SCRFD.pth to ONNX with batchNMSPlugin:
    • First, convert the default scrfd.pth to scrfd.onnx (from the original insightface source).
    • Add post-processing to the original scrfd.onnx with create_post_process.py; this produces scrfd-post-640-640.onnx.
    • From scrfd-post-640-640.onnx, add the custom NMS plugin (which I describe below) to the head with add_full_nms_plugins.py (a rough sketch of this step follows the list).
  2. Build the custom plugin to work with NVIDIA TensorRT: NVIDIA's default (dynamic) batchNMSPlugin has 2 inputs, boxes & scores, which is only compatible with object-detection models like the YOLO series. So I modified the default plugin to add one more output named nmsed_landmarks; you can check out the document here. Just follow the steps below to compile batchNMSCustomPlugin and get libmyplugin.so:
    cd plugins
    mkdir build && cd build
    cmake ..
    make
  3. Convert the ONNX to TensorRT or run the code. Remember to add the environment variable at the beginning of the command:
    LD_PRELOAD=libmyplugin.so python run_pipeline.py

    or

    LD_PRELOAD=libmyplugin.so ./run_pipeline
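
For reference, here is roughly what the add_full_nms_plugins.py step does, sketched with onnx_graphsurgeon. The output names of the post-processed graph, the plugin op name, and the attribute values below are assumptions modeled on the stock batchedNMSPlugin; check the actual script in this repo for the real ones.

import numpy as np
import onnx
import onnx_graphsurgeon as gs

# Load the post-processed SCRFD graph produced by create_post_process.py.
graph = gs.import_onnx(onnx.load("scrfd-post-640-640.onnx"))

# Assumption: the post-processed graph already outputs decoded boxes, scores
# and landmarks; the unpacking order here is a placeholder.
boxes, scores, landmarks = graph.outputs

keep_top_k = 200  # assumed value
num_dets = gs.Variable("num_detections", dtype=np.int32, shape=["batch", 1])
nmsed_boxes = gs.Variable("nmsed_boxes", dtype=np.float32, shape=["batch", keep_top_k, 4])
nmsed_scores = gs.Variable("nmsed_scores", dtype=np.float32, shape=["batch", keep_top_k])
nmsed_classes = gs.Variable("nmsed_classes", dtype=np.float32, shape=["batch", keep_top_k])
nmsed_landmarks = gs.Variable("nmsed_landmarks", dtype=np.float32, shape=["batch", keep_top_k, 10])

# Append the custom NMS node; the op name and attributes are assumptions,
# not necessarily the plugin's real registration name.
nms = gs.Node(
    op="BatchedNMSLandmark_TRT",
    attrs={
        "shareLocation": 1,
        "backgroundLabelId": -1,
        "numClasses": 1,
        "topK": 1000,
        "keepTopK": keep_top_k,
        "scoreThreshold": 0.4,
        "iouThreshold": 0.45,
        "isNormalized": 0,
        "clipBoxes": 0,
    },
    inputs=[boxes, scores, landmarks],
    outputs=[num_dets, nmsed_boxes, nmsed_scores, nmsed_classes, nmsed_landmarks],
)
graph.nodes.append(nms)
graph.outputs = nms.outputs
graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "scrfd-post-640-640.onnx.nms.onnx")

After this step the exported ONNX carries the extra nmsed_landmarks output that the modified plugin fills at runtime.
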
thangdt277 commented 2 years ago

Can I ask one more question? Could you push run_pipeline.py into the project? Thanks for your help!

NNDam commented 2 years ago

run_pipeline.py or run_pipeline is just an example; you can run the demo from this repo by compiling the SCRFD sample application:

cd SCRFD
mkdir build && cd build
cmake ..
make

You will get a sample app named face_detectors in the build directory:

LD_PRELOAD=libmyplugin.so ./face_detectors
donghoang93 commented 1 year ago

Hi NNDam,

I finished step 2 and got the file scrfd-post-640-640.onnx.nms.onnx, but I could not convert it to a TensorRT engine. I used trtexec with this command:

export PATH=$PATH:/usr/src/tensorrt/bin
trtexec --fp16 --onnx=scrfd-post-640-640.onnx.nms.onnx --saveEngine=scrfd.engine --minShapes=input.1:1x3x640x640 --optShapes=input.1:16x3x640x640 --maxShapes=input.1:32x3x640x640 --shapes=input.1:16x3x640x640 --workspace=10000

I attached the detailed log file so you can see. Please guide me further on this step. Thank you! log.txt

NNDam commented 1 year ago

You must add LD_PRELOAD=libmyplugin.so before the trtexec command, and you can also load it as a plugin:

LD_PRELOAD=libmyplugin.so trtexec --plugins=libmyplugin.so --fp16 --onnx= ...
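
If you build the engine from Python instead of trtexec, the same requirement applies: the plugin library has to be loaded before the ONNX is parsed. A minimal sketch with the TensorRT 8.x Python API, assuming the input tensor is named input.1 and reusing the shapes from the command above:

import ctypes
import tensorrt as trt

# Load the custom plugin first (same role as LD_PRELOAD) and register plugins.
ctypes.CDLL("libmyplugin.so", mode=ctypes.RTLD_GLOBAL)
logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, "")

builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("scrfd-post-640-640.onnx.nms.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)
profile = builder.create_optimization_profile()
profile.set_shape("input.1", (1, 3, 640, 640), (16, 3, 640, 640), (32, 3, 640, 640))
config.add_optimization_profile(profile)

engine_bytes = builder.build_serialized_network(network, config)
with open("scrfd.engine", "wb") as f:
    f.write(engine_bytes)

The ctypes.CDLL line is also what makes the plugin visible to a Python pipeline such as run_pipeline.py if you prefer not to rely on LD_PRELOAD.
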
donghoang93 commented 1 year ago

Thank you NNDam. I added the plugin and converted it.