Object Detection Inference

C++ object detection inference from video or image input sources.

To build and compile

mkdir build
cd build
cmake -DDEFAULT_BACKEND=chosen_backend -DCMAKE_BUILD_TYPE=Release ..
cmake --build .

To enable GStreamer support, add -DUSE_GSTREAMER=ON when running CMake:

mkdir build
cd build
cmake -DDEFAULT_BACKEND=chosen_backend -DUSE_GSTREAMER=ON -DCMAKE_BUILD_TYPE=Release ..
cmake --build .

This sets the USE_GSTREAMER option to ON during CMake configuration, enabling GStreamer support in the project.
Remember to replace chosen_backend with your actual backend selection (for example, TENSORRT or ONNX_RUNTIME, as in the demos below).
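
For example, a complete configure-and-build for the ONNX Runtime backend with GStreamer enabled (the backend choice here is just illustrative):

mkdir build
cd build
cmake -DDEFAULT_BACKEND=ONNX_RUNTIME -DUSE_GSTREAMER=ON -DCMAKE_BUILD_TYPE=Release ..
cmake --build .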

Usage

./object-detection-inference \
    --type=<model type> \
    --source=<input source> \
    --labels=</path/to/labels/file> \
    --weights=<path/to/model/weights> \
    [--config=</path/to/model/config>] [--min_confidence=<confidence value>] \
    [--use-gpu] [--warmup] [--benchmark]

The input source can be a live stream (e.g. --source="rtsp://cameraip:port/somelivefeed"), a video file (--source="path/to/video.format"), or an image file (--source="path/to/image.format").
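
The labels file is expected to be a plain-text list of class names. Assuming the usual .names convention (one class per line, in the order the model was trained on), a minimal sketch looks like:

person
bicycle
car
motorbike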

Parameters

To check all available options:

./object-detection-inference --help

Run the demo examples:

Running inference with yolov8s and the TensorRT backend: configure the build with -DDEFAULT_BACKEND=TENSORRT, then run

./object-detection-inference \
    --type=yolov8 \
    --weights=/path/to/weights/your_yolov8s.engine \
    --source=/path/to/video.mp4 \
    --labels=/path/to/labels.names
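
The TensorRT demo expects an already serialized engine file. One possible way to produce it, assuming you start from an Ultralytics yolov8s checkpoint (this export path is an assumption, not the project's prescribed workflow), is:

# Export yolov8s to ONNX (requires the ultralytics package)
yolo export model=yolov8s.pt format=onnx
# Serialize a TensorRT engine from the ONNX model (trtexec ships with TensorRT)
trtexec --onnx=yolov8s.onnx --saveEngine=your_yolov8s.engine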

Running inference with rtdetr-l and the ONNX Runtime backend: configure the build with -DDEFAULT_BACKEND=ONNX_RUNTIME, then run

./object-detection-inference  \
    --type=rtdetr \
    --weights=/path/to/weights/your_rtdetr-l.onnx \
    --source=/path/to/video.mp4 \
    --labels=/path/to/labels.names [--use-gpu]
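
Here too, an exported model is needed first. One way to obtain the ONNX file, again assuming an Ultralytics-based export, is:

# Export rtdetr-l to ONNX (requires the ultralytics package)
yolo export model=rtdetr-l.pt format=onnx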

Run with Docker

Building the Docker Image

docker build --rm -t object-detection-inference:<backend_tag> -f docker/Dockerfile.<backend> .

This command builds a Docker image for your chosen backend from the corresponding Dockerfile.

Running the Docker Container

Replace the placeholders with your desired options and paths:

docker run --rm \
    -v <path_host_data_folder>:/app/data \
    -v <path_host_weights_folder>:/weights \
    -v <path_host_labels_folder>:/labels \
    object-detection-inference:<backend_tag> \
    --type=<model_type> \
    --weights=/weights/<weights_file_for_your_backend> \
    --source=/app/data/<image_or_video> \
    --labels=/labels/<labels_file>
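
For instance, a hypothetical run of the rtdetr demo against an image built with the ONNX Runtime Dockerfile (the onnxruntime tag and host paths below are illustrative):

docker run --rm \
    -v $HOME/data:/app/data \
    -v $HOME/weights:/weights \
    -v $HOME/labels:/labels \
    object-detection-inference:onnxruntime \
    --type=rtdetr \
    --weights=/weights/your_rtdetr-l.onnx \
    --source=/app/data/video.mp4 \
    --labels=/labels/labels.names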
