This project is an Automatic License Plate Recognition (ALPR) system designed for edge devices and optimized for high performance. It includes an AI Engine for image recognition, a frontend and backend for management, MQTT Databus for communication, and GPIO interfacing for hardware control.
The ALPR System combines AI image recognition, backend APIs, a frontend interface, and hardware interfacing to deliver a complete solution for license plate recognition and management.
*(Figure: Schematic representation of the ALPR System.)*

*(Figure: ALPR Manager – UI.)*
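The components exchange messages over the MQTT Databus. As a hedged illustration of how the AI Engine might publish a recognition event, here is a minimal Python sketch using `paho-mqtt`; the topic name, payload fields, and broker address are assumptions for illustration, not the schema defined in this repository:

```python
import json
import time

# Hypothetical topic; the actual naming convention is defined by the databus setup.
TOPIC = "alpr/plates/detected"

def build_plate_event(plate: str, confidence: float) -> str:
    """Serialize a recognition result as a JSON MQTT payload (illustrative schema)."""
    return json.dumps({
        "plate": plate,
        "confidence": round(confidence, 3),
        "timestamp": time.time(),
    })

def publish_event(host: str, plate: str, confidence: float) -> None:
    # Requires `pip install paho-mqtt` and a broker reachable at `host`.
    import paho.mqtt.client as mqtt
    client = mqtt.Client()
    client.connect(host, 1883)
    client.publish(TOPIC, build_plate_event(plate, confidence), qos=1)
    client.disconnect()
```

A consumer such as the GPIO handler could subscribe to the same topic and act on the decoded payload.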
Clone the repository:

```bash
git clone https://github.com/CuAuPro/alpr-system.git
cd alpr-system
```

Set up the backend:

```bash
cd backend
npm install
npm run build
```

Set up the frontend:

```bash
cd ../frontend
npm install
ng build --configuration=production
```

Set up MQTT (optional, for additional requirements):

```bash
cd ../databus
# Follow instructions in mqtt/README.md
```

Set up the Python AI scripts (optional, for additional requirements):

```bash
cd ../ai-engine
# Follow instructions in ai-engine/README.md
```

Set up the Python GPIO scripts (optional, for additional requirements):

```bash
cd ../gpio-handler
# Follow instructions in gpio-handler/README.md
```
- For detailed instructions on backend setup and usage, refer to `backend/README.md`.
- For detailed instructions on frontend setup and usage, refer to `frontend/README.md`.
- For detailed instructions on Databus setup and usage, refer to `databus/README.md`.
- For detailed instructions on AI setup and usage, refer to `ai-engine/README.md`.
- For detailed instructions on GPIO setup and usage, refer to `gpio-handler/README.md`.
Create a `dev.env` file in the `backend/env` directory with the following contents:

```env
NODEJS="DEVEL"
PORT=443
JWT_SECRET=your_jwt_secret
```

Create a `prod.env` file in the `backend/env` directory with the following contents:

```env
NODEJS="PROD"
PORT=443
JWT_SECRET=your_jwt_secret
```
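The backend (Node.js) reads these variables at startup, typically via a dotenv-style loader. Purely to illustrate the `KEY=VALUE` file format shown above (this is not the backend's actual loading code), a small Python sketch that parses such a file and checks the required keys:

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, ignoring blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')  # drop optional quotes
    return env

REQUIRED = ("NODEJS", "PORT", "JWT_SECRET")

def validate(env: dict) -> None:
    """Raise if any required key from the env files above is missing."""
    missing = [k for k in REQUIRED if k not in env]
    if missing:
        raise ValueError(f"missing keys: {missing}")

sample = 'NODEJS="DEVEL"\nPORT=443\nJWT_SECRET=your_jwt_secret\n'
config = parse_env(sample)
validate(config)
```

Keep `JWT_SECRET` out of version control; the `env` files should be listed in `.gitignore`.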
Dockerfiles are provided for both the backend and frontend components. You can use Docker Compose to easily build and run the entire system in containers.
Set Docker's `default-runtime` to `nvidia` so that the NVCC compiler and GPU are available during `docker build` operations. Add `"default-runtime": "nvidia"` to your `/etc/docker/daemon.json` configuration file before attempting to build the containers:
```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
```
Then restart the Docker service, or reboot your system before proceeding:
```bash
sudo systemctl restart docker
```

You can then confirm the change in the output of `docker info`:

```bash
$ sudo docker info | grep 'Default Runtime'
Default Runtime: nvidia
```
You also need to create a persistent database file. To create the directory and set up a volume for the backend `alpr.db`, follow these steps:

Create the directory: open a terminal and run the following command to create the database directory under `/var/opt/docker/alpr-system`:

```bash
sudo mkdir -p /var/opt/docker/alpr-system/database
```

Copy an initialized `alpr.db` to `/var/opt/docker/alpr-system/database`.
Set up the volume: after creating the directory, specify a volume for the backend `alpr.db` in your Docker Compose file. Add the following volume configuration under the backend service in your `docker-compose.yml` file:

```yaml
volumes:
  - /var/opt/docker/alpr-system/database:/app/backend/database
```
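Putting the pieces together, the backend service entry in `docker-compose.yml` might look like the sketch below. The service name, build context, and port mapping are assumptions for illustration; only the volume mapping comes from the steps above:

```yaml
services:
  backend:
    build: ./backend            # assumed build context
    ports:
      - "443:443"               # PORT from the env files above
    env_file:
      - ./backend/env/prod.env  # assumed env file location
    volumes:
      - /var/opt/docker/alpr-system/database:/app/backend/database
```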
To start the application using Docker Compose:
```bash
docker-compose up --build
```
This project is licensed under the terms specified in the LICENSE file.