Adblu opened this issue 3 years ago
Hi Adblu,
Thanks for reaching out!
The following project used trt_pose with deepstream (without PyTorch).
https://github.com/NVIDIA-AI-IOT/deepstream_pose_estimation
Using trt_pose without PyTorch requires the following:

1. The engine must be executed using the TensorRT Python / C++ API directly. You can serialize it to a standalone file first:

```python
# model_trt is the TRTModule returned by torch2trt(); this writes the
# optimized TensorRT engine to a file that can be loaded without PyTorch
with open('model.engine', 'wb') as f:
    f.write(model_trt.engine.serialize())
```

2. The post-processing code in this project must be extracted and compiled without the PyTorch extension library used for bindings.
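Loading and running the serialized engine with the TensorRT Python API might look roughly like this. This is a hedged sketch, not code from the project: it assumes the TensorRT 8.x API, pycuda for device memory, and an engine with a single input binding (index 0) and a single output binding (index 1). Verify binding indices and shapes against your own engine.

```python
import numpy as np

try:
    import tensorrt as trt
    import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
    import pycuda.driver as cuda
except ImportError:
    trt = None  # TensorRT / pycuda are only present on the target device

def infer(engine_path, input_array):
    """Run one inference pass on a serialized engine, no PyTorch involved.

    input_array: contiguous float32 numpy array matching binding 0's shape.
    Returns the flattened output of binding 1 as a numpy array.
    """
    logger = trt.Logger(trt.Logger.WARNING)
    with open(engine_path, 'rb') as f, trt.Runtime(logger) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
    context = engine.create_execution_context()

    # Allocate device buffers for the (assumed) single input and output
    output = np.empty(trt.volume(engine.get_binding_shape(1)), dtype=np.float32)
    d_input = cuda.mem_alloc(input_array.nbytes)
    d_output = cuda.mem_alloc(output.nbytes)

    cuda.memcpy_htod(d_input, input_array)
    context.execute_v2([int(d_input), int(d_output)])
    cuda.memcpy_dtoh(output, d_output)
    return output
```

On a Jetson you would call this with the preprocessed camera frame and then feed the result into the extracted post-processing step.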
Let me know if this helps or you have any other questions.
Best, John
Is there any sample code for using DeepStream pose estimation with Python?
Hi Shekarneo,
Thanks for reaching out!
Unfortunately, I'm not aware of a Python deepstream example which uses trt_pose directly.
Perhaps the following project would be a useful reference:
https://github.com/NVIDIA-AI-IOT/deepstream_python_apps
Best, John
Hi jaybdub
Thanks for the reply. I am able to run trt_pose in DeepStream using Python, but the output parser is not working, and I need help porting the C++ parsing code to Python (or calling it from Python).
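For the parser, the first stage of trt_pose's post-processing is peak-finding on the part confidence maps. A simplified NumPy version of that step is sketched below; the real C++ code also performs PAF-based part association, and the function name and parameters here are illustrative rather than the actual parser API.

```python
import numpy as np

def find_peaks(cmap, threshold=0.1, window=5):
    """Return (part, row, col) tuples for local maxima above threshold.

    cmap: (num_parts, H, W) confidence maps produced by the pose network.
    A cell is a peak if it is the maximum within a window x window
    neighborhood and its value exceeds threshold.
    """
    half = window // 2
    peaks = []
    num_parts, h, w = cmap.shape
    for part in range(num_parts):
        m = cmap[part]
        for i in range(h):
            for j in range(w):
                v = m[i, j]
                if v < threshold:
                    continue
                i0, i1 = max(0, i - half), min(h, i + half + 1)
                j0, j1 = max(0, j - half), min(w, j + half + 1)
                if v >= m[i0:i1, j0:j1].max():
                    peaks.append((part, i, j))
    return peaks
```

This naive double loop is easy to verify against the C++ implementation before optimizing (e.g. with vectorized numpy or scipy's `maximum_filter`).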
I have a Jetson Nano with a Yocto OS. I understand that the model is in TRT format; however, PyTorch is often called to handle side operations. Is it possible to get rid of PyTorch altogether?
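For reference, one of the common "side operations" is input normalization, which trt_pose does with PyTorch/torchvision. A numpy-only replacement is sketched below; it assumes the standard ImageNet mean/std normalization and the NCHW float32 layout that trt_pose's models expect, so verify these values against your own pipeline.

```python
import numpy as np

# ImageNet normalization constants commonly used by trt_pose preprocessing
# (assumption -- confirm against the training configuration of your model)
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_hwc_uint8):
    """Convert an HWC uint8 RGB frame to an NCHW float32 input tensor.

    Replaces the torch-based transform so no PyTorch is needed at runtime.
    """
    x = image_hwc_uint8.astype(np.float32) / 255.0  # scale to [0, 1]
    x = (x - MEAN) / STD                            # per-channel normalize
    x = np.transpose(x, (2, 0, 1))                  # HWC -> CHW
    return np.ascontiguousarray(x[None])            # add batch dim -> NCHW
```

The output can be passed straight to the TensorRT execution context as the input binding.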