NVIDIA / tao_tutorials

Quick start scripts and tutorial notebooks to get started with TAO Toolkit
Apache License 2.0

C++ inference code for getting inference on the created Engine file for CenterPose #7

Closed monajalal closed 6 months ago

monajalal commented 6 months ago

@Arun-George-Zachariah

Hi Arun

My intention is to use C++ to run inference on the engine file.

I was wondering whether, instead of running

# Inference with generated TensorRT engine
!tao deploy centerpose inference -e $SPECS_DIR/infer.yaml \
                              inference.trt_engine=$RESULTS_DIR/gen_trt_engine/centerpose_model.engine \
                              results_dir=$RESULTS_DIR/

for the CenterPose notebook, you might also have a C++ script?

Also, could you please point me to the pieces of code that back this command?

I can't use TAO on our Jetson and need to use C++.

Thanks for any help

P.S.

Looking at tao_pytorch_backend/nvidia_tao_pytorch/cv/centerpose/scripts/inference.py, which backs centerpose.ipynb, I see:

    elif model_path.endswith('.engine'):
        raise NotImplementedError("TensorRT inference is supported through tao-deploy. "
                                  "Please use tao-deploy to generate TensorRT engine and run inference.")

So there is no code path for running inference when all I have is an engine file.

I am converting the ONNX model to an engine on my end using our company's C++ code.

However, I need to know what C++ code to use for inference once I have the engine file.

We cannot use NVIDIA TAO on the backend we run on the NVIDIA Jetson Xavier NX.

Arun-George-Zachariah commented 6 months ago

Hi Mona. You can check out the Isaac ROS inference pipeline for the C++ files.

Also, the NotImplementedError is raised because TensorRT engine generation and inference are supported through TAO-Deploy.

monajalal commented 6 months ago

Hi Arun,

Thanks a lot for your response. I'll move forward accordingly; so far I am trying to load an engine file I created with TAO Deploy using TensorRT. You can see my issue here: https://github.com/NVIDIA/tao_deploy/issues/8

I'll close this one.