I followed the deployment steps in the GitHub README, but because my Jetson Orin Nano is the 4 GB version and doesn't have enough memory, TensorRT couldn't convert the model into an engine file. So I copied the entire modified DeepStream LPR app project, including the engine file, from the Jetson AGX Orin to the Jetson Orin Nano and ran it directly:
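Note that TensorRT engine files are generally tied to the GPU they were built on; this copy likely works because the AGX Orin and Orin Nano both use the same Ampere-generation Jetson GPU. The copy step might look roughly like this (hostname, username, and directory names are assumptions, not from the original setup):

```shell
# On the Orin Nano: pull the whole built project over from the AGX Orin.
# "nvidia@agx-orin.local" and "~/deepstream_lpr_app" are placeholders --
# substitute the actual user, host, and path of your build machine.
scp -r nvidia@agx-orin.local:~/deepstream_lpr_app ~/deepstream_lpr_app

# Run the app on the Nano using the copied .engine file:
cd ~/deepstream_lpr_app
```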
./deepstream-lpr-app 2 2 0 infer ~/889_1699793313.mp4 output.264
And it worked!