Closed monajalal closed 6 months ago
Hi Mona. You can check out Isaac ROS inference pipeline for the C++ files.
Also, the NotImplementedError
is raised because TensorRT engine generation and inference are supported through TAO-Deploy.
Hi Arun,
Thanks a lot for your response. I am moving forward accordingly; so far I am trying to load an engine file I created with TAO Deploy using TensorRT. You can see my issue here: https://github.com/NVIDIA/tao_deploy/issues/8
I am closing this one.
@Arun-George-Zachariah
Hi Arun
My intention is to use C++ to run inference on the engine file.
I was wondering whether, in addition to
the centerpose ipynb, you also have a C++ script?
Also, could you please point me to the pieces of code that are used by this command?
I can't use TAO on our Jetson and need to use C++.
Thanks for any help.
P.S.
When I look at centerpose.ipynb there is no Python code, and when I look at tao_pytorch_backend/nvidia_tao_pytorch/cv/centerpose/scripts/inference.py
I see there is no code for running inference in the case where I already have an engine file.
I am converting the ONNX to an engine on my end using our company's C++ code.
However, I need to know what the C++ code looks like for running inference once I have the engine file.
We cannot use NVIDIA TAO in the backend we use on the NVIDIA Jetson Xavier NX.
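For anyone landing here with the same question: the general shape of engine-file inference in C++ is to deserialize the engine with the TensorRT runtime, create an execution context, allocate device buffers for each binding, and enqueue the inference on a CUDA stream. Below is a minimal sketch using the TensorRT 8.x binding-index API (`getNbBindings`/`enqueueV2`; newer releases prefer `getNbIOTensors`/`enqueueV3`). The engine filename, the FP32 assumption, and the omitted pre/post-processing are assumptions; the CenterPose-specific decoding of the output heads is not shown.

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iostream>
#include <vector>

// Minimal logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cerr << msg << "\n";
    }
};

int main() {
    // Read the serialized engine produced by TAO-Deploy (path is an example).
    std::ifstream file("centerpose.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    Logger logger;
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size());
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();

    // Allocate one device buffer per binding (input and all outputs).
    std::vector<void*> bindings(engine->getNbBindings());
    for (int i = 0; i < engine->getNbBindings(); ++i) {
        nvinfer1::Dims dims = engine->getBindingDimensions(i);
        size_t count = 1;
        for (int d = 0; d < dims.nbDims; ++d) count *= dims.d[d];
        cudaMalloc(&bindings[i], count * sizeof(float));  // assumes FP32 bindings
    }

    // 1. cudaMemcpy the preprocessed input into the input binding,
    // 2. run inference on a stream, 3. copy outputs back to host.
    cudaStream_t stream;
    cudaStreamCreate(&stream);
    context->enqueueV2(bindings.data(), stream, nullptr);
    cudaStreamSynchronize(stream);

    // ... copy output bindings to host and decode the CenterPose heads here ...

    for (void* b : bindings) cudaFree(b);
    cudaStreamDestroy(stream);
    delete context;
    delete engine;
    delete runtime;
    return 0;
}
```

The Isaac ROS inference pipeline mentioned above contains a production version of this pattern; the sketch is only meant to show which TensorRT API calls are involved.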