You just want to use TensorRT to build the engine, am I correct? If yes, TensorRT should already be installed after you flash JetPack; you just need to enable the CUDA-X SDK, if I remember correctly. DeepStream is also an option when flashing the board.
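A quick way to check that, assuming a standard JetPack image (the package names below are the usual ones on Jetson and may differ between JetPack versions):

```sh
# List the TensorRT/nvinfer packages that JetPack installed, if any
dpkg -l | grep -E "nvinfer|tensorrt"

# If the Python bindings are installed, print the TensorRT version
python3 -c "import tensorrt; print(tensorrt.__version__)"
```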
Also, I would suggest using the latest release; you will get better performance and layer support.
You should modify `-DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/` to point to TensorRT's lib directory. Ref: https://github.com/NVIDIA/TensorRT/issues/928#issuecomment-861790359
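For example (a sketch only; these are the usual Jetson paths, not guaranteed on every image), you can first locate the installed TensorRT libraries and then pass that directory to cmake:

```sh
# Find where libnvinfer actually lives; on Jetson it is typically /usr/lib/aarch64-linux-gnu/
find /usr -name "libnvinfer.so*" 2>/dev/null

# Pass the directory that contains those libraries as TRT_LIB_DIR
/usr/local/bin/cmake .. -DGPU_ARCHS=53 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
```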
Closing since there has been no activity for more than 3 weeks; please reopen if you still have questions. Thanks all!
Hi, I followed the steps from this [website](https://docs.nvidia.com/tao/tao-toolkit/text/ds_tao/yolo_v4_tiny_ds.html#:~:text=y%0Asudo%20ldconfig-,TensorRT%20OSS%20on%20Jetson%20(ARM64),-Install%20Cmake%20(%3E%3D3.13)) to install TensorRT OSS. At the step
```sh
/usr/local/bin/cmake .. -DGPU_ARCHS=53 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
```
I got this error. I searched for this issue on the NVIDIA forum, and there are similar and even identical issues there, but their solutions didn't solve my problem; I don't know why. Is there any package you could advise for converting ONNX to an engine, or for using ONNX in DeepStream directly? Note: I have tried building directly on my Jetson Nano, without any container.
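One alternative worth mentioning while the OSS build is broken (not from the TAO doc above, so take it as a sketch) is the trtexec tool that ships with TensorRT on Jetson; it can build an engine from an ONNX file without any extra packages:

```sh
# trtexec is installed together with TensorRT, usually under /usr/src/tensorrt/bin on Jetson.
# model.onnx and model.engine are placeholder file names.
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```

As far as I know, DeepStream's nvinfer element can also take an ONNX model directly via the `onnx-file` key in its config file and will build the engine on the first run.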