This sample has been moved to https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/tree/master/apps/tao_others/deepstream_lpr_app. Please refer to the new link.
This sample shows how to use cascaded models for detection and classification with DeepStream SDK version 5.0.1 or later. All models in this sample are TAO 3.0 models.
PGIE (car detection) -> SGIE (car license plate detection) -> SGIE (car license plate recognition)
This pipeline is based on the three TAO models below.
For more details about the TAO 3.0 LPD and LPR models and TAO training, refer to the TAO documentation.
The table below shows the end-to-end performance of processing 1080p videos with this sample application.
Device | Number of streams | Batch Size | Total FPS |
---|---|---|---|
Jetson Nano | 1 | 1 | 9.2 |
Jetson NX | 3 | 3 | 80.31 |
Jetson Xavier | 5 | 5 | 146.43 |
Jetson Orin | 5 | 5 | 341.65 |
T4 | 14 | 14 | 447.15 |
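The per-stream throughput implied by the table is Total FPS divided by the number of streams; a quick sketch with the figures copied from the table above:

```python
# Per-stream FPS = Total FPS / number of streams (values from the table above).
perf = {
    "Jetson Nano": (1, 9.2),
    "Jetson NX": (3, 80.31),
    "Jetson Xavier": (5, 146.43),
    "Jetson Orin": (5, 341.65),
    "T4": (14, 447.15),
}
per_stream = {dev: round(total / streams, 2) for dev, (streams, total) in perf.items()}
```

For example, Jetson NX sustains roughly 26.77 FPS per stream when running three 1080p streams.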
Make sure the deepstream-test1 sample can run successfully to verify your DeepStream installation.
Download the x86 or Jetson tao-converter compatible with your platform from the links in https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/resources/tao-converter/version.
The LPR sample application can work as a Triton client on x86 platforms.
// SSH
git clone git@github.com:NVIDIA-AI-IOT/deepstream_lpr_app.git
// or HTTPS
git clone https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app.git
All models can be downloaded with the following commands:
cd deepstream_lpr_app/
For US car plate recognition
./download_convert.sh us 0 #if DeepStream SDK 5.0.1, use ./download_convert.sh us 1
For Chinese car plate recognition
./download_convert.sh ch 0 #if DeepStream SDK 5.0.1, use ./download_convert.sh ch 1
Starting with DeepStream 6.1, the LPR sample application supports three inferencing modes: `infer` (gst-nvinfer), `triton` (gst-nvinferserver with the native Triton C API), and `tritongrpc` (gst-nvinferserver with Triton gRPC).
The following instructions are only needed when the LPR sample application runs with gst-nvinferserver inferencing on x86 platforms as the Triton client. For the nvinfer mode, go directly to the Build and Run part.
The Triton Inference Server libraries must be installed if the DeepStream LPR sample application is to work as the Triton client. The Triton client documentation explains how to install the necessary libraries. An easier way is to run the DeepStream application in the DeepStream Triton container.
To set up Triton Inference Server for native C API inferencing, refer to triton_server.md.
To set up Triton Inference Server for gRPC inferencing, refer to triton_server_grpc.md.
make
cd deepstream-lpr-app
For US car plate recognition
cp dict_us.txt dict.txt
For Chinese car plate recognition
cp dict_ch.txt dict.txt
Run the application:
./deepstream-lpr-app <1: US car plate model | 2: Chinese car plate model> \
    <1: output as h264 file | 2: fakesink | 3: display output> <0: ROI disable | 1: ROI enable> <infer|triton|tritongrpc> \
    <input mp4 file name> ... <input mp4 file name> <output file name>
Or run with a YAML config file.
./deepstream-lpr-app <app YAML config file>
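The positional argument order above can be captured in a small helper; a sketch (the helper name and structure are illustrative, not part of the sample itself):

```python
def build_cmd(model, output, roi, mode, inputs, out_file):
    """Assemble the deepstream-lpr-app command line in the order the usage
    string requires: model type, output type, ROI flag, inference mode,
    one or more input mp4 files, then the output file name."""
    return (["./deepstream-lpr-app", str(model), str(output), str(roi), mode]
            + list(inputs) + [out_file])

# Mirrors the US-plate example below: model 1 (US), output 2 (fakesink),
# ROI 0 (disabled), nvinfer mode, two input clips.
cmd = build_cmd(1, 2, 0, "infer",
                ["us_car_test2.mp4", "us_car_test2.mp4"], "output.264")
```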
A sample of US car plate recognition:
./deepstream-lpr-app 1 2 0 infer us_car_test2.mp4 us_car_test2.mp4 output.264
Or run with a YAML config file.
./deepstream-lpr-app lpr_app_infer_us_config.yml
A sample of Chinese car plate recognition:
./deepstream-lpr-app 2 2 0 infer ch_car_test.mp4 ch_car_test.mp4 output.264
A sample of US car plate recognition:
./deepstream-lpr-app 1 2 0 triton us_car_test2.mp4 us_car_test2.mp4 output.264
Or run with the YAML config file after modifying the Triton part in the yml file.
./deepstream-lpr-app lpr_app_triton_us_config.yml
A sample of Chinese car plate recognition:
./deepstream-lpr-app 2 2 0 triton ch_car_test2.mp4 ch_car_test2.mp4 output.264
Or run with the YAML config file after modifying the Triton part in the yml file.
./deepstream-lpr-app lpr_app_triton_ch_config.yml
A sample of US car plate recognition:
./deepstream-lpr-app 1 2 0 tritongrpc us_car_test2.mp4 us_car_test2.mp4 output.264
Or run with the YAML config file after modifying the Triton part in the yml file.
./deepstream-lpr-app lpr_app_tritongrpc_us_config.yml
A sample of Chinese car plate recognition:
./deepstream-lpr-app 2 2 0 tritongrpc ch_car_test2.mp4 ch_car_test2.mp4 output.264
Or run with the YAML config file after modifying the Triton part in the yml file.
./deepstream-lpr-app lpr_app_tritongrpc_ch_config.yml