
RoadSage - Intelligent Driver Assistance System

Introduction

RoadSage is an advanced driver assistance system designed to enhance driving safety and convenience. It detects objects on the road, identifies drivable areas and lanes, and provides depth information about various objects. Utilizing state-of-the-art machine learning models, RoadSage ensures high accuracy and real-time performance.

RoadSage Demo

Features

Object Detection and Segmentation

RoadSage employs YOLOP (You Only Look Once for Panoptic driving Perception), a robust and efficient model that performs object detection, drivable area segmentation, and lane detection in a single network.
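For orientation, here is a minimal sketch of running YOLOP's three output heads on one frame, assuming the upstream hustvl/yolop torch.hub entry point rather than this repository's bundled weights:

    import torch

    # Load the pretrained YOLOP model via torch.hub (assumption: the
    # upstream checkpoint matches the weights used by this repository).
    model = torch.hub.load('hustvl/yolop', 'yolop', pretrained=True)
    model.eval()

    # Dummy tensor standing in for a normalized 640x640 RGB road frame.
    img = torch.randn(1, 3, 640, 640)

    with torch.no_grad():
        # YOLOP returns three outputs: object detections, drivable-area
        # segmentation, and lane-line segmentation.
        det_out, da_seg_out, ll_seg_out = model(img)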

Depth Estimation

Two depth estimation models were evaluated:

  1. Global-Local Path Networks (GLPN): evaluated first, but produced less accurate depth maps on road scenes.
  2. Depth Anything: produced better depth estimates and was integrated into the final system (a usage sketch follows this list).
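Depth Anything can be run through the Hugging Face transformers depth-estimation pipeline. This is a rough sketch rather than the repository's exact integration; the checkpoint name LiheYoung/depth-anything-small-hf is an assumption, inferred from the LiheYoung model folder the project ships:

    from PIL import Image
    from transformers import pipeline

    # Depth-estimation pipeline; the checkpoint is an assumption, chosen
    # as the smallest published Depth Anything model on the Hub.
    depth_estimator = pipeline(
        task="depth-estimation",
        model="LiheYoung/depth-anything-small-hf",
    )

    # Predict a dense depth map for one road image.
    image = Image.open("inference/images/example.jpg")
    result = depth_estimator(image)
    result["depth"].save("inference/outputs/example_depth.png")  # PIL depth map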

Implementation and Optimization

  1. Clone the repository:
    git clone https://github.com/Grifind0r/RoadSage--Intelligent-Driver-Assistance-System.git
    cd RoadSage--Intelligent-Driver-Assistance-System
  2. Install the required dependencies:
    pip install -r requirements.txt
  3. Ensure you have the correct versions of PyTorch and torchvision:
    conda install pytorch==1.7.0 torchvision==0.8.0 cudatoolkit=10.2 -c pytorch
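A quick sanity check, assuming a CUDA-capable machine, confirms that the versions pinned in step 3 were picked up:

    import torch
    import torchvision

    # Verify the versions pinned in step 3 and that CUDA is visible.
    print(torch.__version__)        # expected: 1.7.0
    print(torchvision.__version__)  # expected: 0.8.0
    print(torch.cuda.is_available())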

Usage

Download these files first

After cloning the repository, download the following folders into the 'RoadSage--Intelligent-Driver-Assistance-System' directory: LiheYoung, inference, weights, and vinvino02.
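If the download link is unavailable, note that the LiheYoung and vinvino02 folders appear to mirror Hugging Face Hub namespaces (Depth Anything and GLPN respectively). A hedged alternative, assuming the project expects standard Hub snapshots and these exact checkpoints, is:

    from huggingface_hub import snapshot_download

    # Assumption: the 'LiheYoung' and 'vinvino02' folders are local
    # snapshots of Depth Anything and GLPN checkpoints from the Hub.
    snapshot_download(repo_id="LiheYoung/depth-anything-small-hf",
                      local_dir="LiheYoung/depth-anything-small-hf")
    snapshot_download(repo_id="vinvino02/glpn-nyu",
                      local_dir="vinvino02/glpn-nyu")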

Demo Test

We provide two testing methods:

Folder

Store the images or videos in the directory passed to --source; the inference results are saved to inference/outputs:

python tools/demo.py --source inference/images --save-dir inference/outputs

Camera

If a camera is connected to your computer, set the source to the camera index (the default is 0):

python tools/demo.py --source 0
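To confirm which index your camera uses before running the demo, a small OpenCV probe (illustrative only, not part of the repository) can help:

    import cv2

    # Probe the first few camera indices and report which ones open.
    for index in range(3):
        cap = cv2.VideoCapture(index)
        if cap.isOpened():
            print(f"Camera found at index {index}")
        cap.release()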

File Structure

Contributing

Contributions are welcome! Please fork the repository and submit a pull request.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgements