NVIDIA-AI-IOT / trt_pose

Real-time pose estimation accelerated with NVIDIA TensorRT
MIT License

PyTorch. Is it really needed? #151

Open Adblu opened 3 years ago

Adblu commented 3 years ago

I have a Jetson Nano running a Yocto-based OS. I understand that the model is in TensorRT engine format; however, PyTorch is frequently called to handle side operations. Is it possible to get rid of PyTorch entirely?

jaybdub commented 3 years ago

Hi Adblu,

Thanks for reaching out!

The following project uses trt_pose with DeepStream (without PyTorch):

https://github.com/NVIDIA-AI-IOT/deepstream_pose_estimation

Using this without PyTorch requires the following:

  1. The engine must be executed using the TensorRT Python / C++ API directly (see the sketch after this list).

    1. See trt_pose/utils/export_for_isaac.py for an example of exporting the model via ONNX.
    2. Alternatively, if you use torch2trt to convert the model, you can serialize the engine:

       ```python
       with open('model.engine', 'wb') as f:
           f.write(model_trt.engine.serialize())
       ```

  2. The post-processing in this project must be extracted and compiled without the PyTorch extension library used for binding.
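
For step 1, here is a minimal sketch of running the serialized engine through the TensorRT Python API alone. It assumes TensorRT 8.x (the binding-index API), pycuda for buffer management, and that binding 0 is the image input; it is an illustration of the pattern, not code from this repository.

```python
# Sketch: execute a serialized engine with TensorRT + pycuda only.
# Assumes TensorRT 8.x and that binding 0 is the network input.
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open('model.engine', 'rb') as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate page-locked host buffers and device buffers for every binding
# (one image input plus the two trt_pose outputs, cmap and paf).
stream = cuda.Stream()
bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(i)), dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# Copy the (preprocessed) input in, execute, and copy the outputs back.
host_bufs[0].fill(0.0)  # stand-in for a real preprocessed image tensor
cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
for host, dev in zip(host_bufs[1:], dev_bufs[1:]):
    cuda.memcpy_dtoh_async(host, dev, stream)
stream.synchronize()
```

The two output buffers hold the cmap and paf tensors that the extracted post-processing in step 2 would consume.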

Let me know if this helps or you have any other questions.

Best, John

shekarneo commented 2 years ago

Is there any sample code for running DeepStream pose estimation using Python?

jaybdub commented 2 years ago

Hi Shekarneo,

Thanks for reaching out!

Unfortunately, I'm not aware of a Python DeepStream example which uses trt_pose directly.

Perhaps the following project would be a useful reference.

https://github.com/NVIDIA-AI-IOT/deepstream_python_apps

Best, John
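
As a starting point, the deepstream_python_apps samples attach a pad probe to the inference element and read the raw tensor output through the pyds bindings. Below is a minimal sketch of that pattern; it assumes output-tensor-meta=1 is set in the nvinfer config, and the layer handling is illustrative rather than trt_pose's actual parser.

```python
# Sketch: read raw nvinfer tensor output from Python with pyds, in the
# style of the deepstream_python_apps samples. Requires output-tensor-meta=1.
import ctypes
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds

def pgie_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == \
                    pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                for i in range(tensor_meta.num_output_layers):
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                    ptr = ctypes.cast(pyds.get_ptr(layer.buffer),
                                      ctypes.POINTER(ctypes.c_float))
                    # np.ctypeslib.as_array(ptr, shape=...) with the layer's
                    # real cmap/paf dimensions gives a NumPy view to parse.
            try:
                l_user = l_user.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```

The probe would be attached to the nvinfer element (here assumed to be called pgie) with `pgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, pgie_src_pad_buffer_probe, 0)`.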

shekarneo commented 2 years ago

Hi jaybdub

Thanks for the reply. I am able to run trt_pose in DeepStream using Python, but the output parser is not working, and I need help porting, or otherwise using, the C++ parsing code from Python.
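
One common route for reusing C++ parsing code from Python is to compile it into a plain shared library behind a C entry point and call it through ctypes. The sketch below is purely illustrative: the library name libtrtpose_parse.so, the function parse_peaks, and its signature are hypothetical placeholders for a wrapper you would write, not symbols that trt_pose actually exports.

```python
# Hypothetical ctypes binding -- library name, function name, and signature
# are placeholders for a C wrapper around the C++ post-processing code.
import ctypes
import numpy as np

lib = ctypes.CDLL('./libtrtpose_parse.so')  # hypothetical compiled parser
lib.parse_peaks.restype = ctypes.c_int
lib.parse_peaks.argtypes = [
    ctypes.POINTER(ctypes.c_float),            # cmap buffer (C*H*W floats)
    ctypes.c_int, ctypes.c_int, ctypes.c_int,  # C, H, W
    ctypes.c_float,                            # peak threshold
]

cmap = np.zeros((18, 56, 56), dtype=np.float32)  # example cmap shape
num_peaks = lib.parse_peaks(
    cmap.ctypes.data_as(ctypes.POINTER(ctypes.c_float)),
    18, 56, 56,
    ctypes.c_float(0.1),
)
```

Keeping the interface to flat float buffers and integers avoids any dependency on the PyTorch extension machinery, which is what step 2 in the earlier answer calls for.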