RoadSage is an advanced driver assistance system designed to enhance driving safety and convenience. It detects objects on the road, identifies drivable areas and lanes, and provides depth information about various objects. Utilizing state-of-the-art machine learning models, RoadSage ensures high accuracy and real-time performance.
RoadSage employs YOLOP (You Only Look Once for Panoptic driving), a robust and efficient model for object detection, drivable area segmentation, and lane detection.
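YOLOP is a multi-task network: for each frame it produces object detections plus two segmentation heads (drivable area and lane lines). As a minimal sketch of how a segmentation head's output can be turned into a usable mask — the tensor shapes and dummy values here are illustrative assumptions, not the repository's exact API:

```python
import numpy as np

def seg_logits_to_mask(logits: np.ndarray) -> np.ndarray:
    """Convert 2-channel segmentation logits of shape (2, H, W)
    (background vs. target class) into a binary mask of shape (H, W)
    via per-pixel argmax."""
    return np.argmax(logits, axis=0).astype(np.uint8)

# Dummy logits standing in for a drivable-area head output.
h, w = 4, 6
da_logits = np.zeros((2, h, w), dtype=np.float32)
da_logits[1, 1:3, 2:5] = 5.0  # "drivable" class dominates in this region

da_mask = seg_logits_to_mask(da_logits)
print(da_mask.sum())  # 6 drivable pixels (2 rows x 3 cols)
```

The same argmax step applies to the lane-line head; the resulting masks are typically overlaid on the input frame for visualization.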
Two depth estimation models were evaluated; their weight folders (`LiheYoung` and `vinvino02`) are downloaded separately after cloning the repository (see the setup steps below).
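One common way to combine a depth model's output with the detector is to sample the predicted depth map inside each bounding box. A small sketch under assumed conventions — a per-pixel depth map in meters and boxes as `(x1, y1, x2, y2)` pixel coordinates; the function name is hypothetical, not the repository's API:

```python
import numpy as np

def box_depth(depth_map: np.ndarray, box: tuple) -> float:
    """Estimate an object's distance as the median predicted depth
    (in meters) inside its bounding box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    patch = depth_map[y1:y2, x1:x2]
    return float(np.median(patch))

# Dummy 8x8 depth map: background at 30 m, one object at 10 m.
depth = np.full((8, 8), 30.0)
depth[2:6, 2:6] = 10.0

print(box_depth(depth, (2, 2, 6, 6)))  # 10.0
```

The median is used rather than the mean so that background pixels at the box edges do not skew the estimate.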
```shell
# Clone the repository and install Python dependencies
git clone https://github.com/Grifind0r/RoadSage--Intelligent-Driver-Assistance-System.git
cd RoadSage--Intelligent-Driver-Assistance-System
pip install -r requirements.txt

# Install PyTorch (CUDA 10.2 build)
conda install pytorch==1.7.0 torchvision==0.8.0 cudatoolkit=10.2 -c pytorch
```
After cloning the repository, download the following files and folders into the `RoadSage--Intelligent-Driver-Assistance-System` directory: `LiheYoung`, `inference`, `weights`, `vinvino02`.
We provide two testing methods:
Store the images or videos in the `--source` directory; the inference results are saved to `inference/outputs`:

```shell
python tools/demo.py --source inference/images inference/outputs
```
If there is a camera connected to your computer, set the source to the camera number (default is 0):

```shell
python tools/demo.py --source 0
```
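Since `--source` accepts either a path or a camera index, a script consuming it has to disambiguate the two. A minimal sketch of how that might look — the helper name is hypothetical and not taken from the repository:

```python
def resolve_source(source: str):
    """Interpret a --source argument: a purely numeric string (e.g. "0")
    is treated as a camera index for cv2.VideoCapture; anything else is
    treated as a file or directory path."""
    return int(source) if source.isdigit() else source

print(resolve_source("0"))                 # 0 (camera index)
print(resolve_source("inference/images"))  # 'inference/images' (path)
```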
Key files:

- `tools/depths.py`: contains the depth estimation models and related code.
- `tools/demo.py`: script to run the demo tests.
- `README.md`: project documentation.

Contributions are welcome! Please fork the repository and submit a pull request.
This project is licensed under the MIT License - see the LICENSE file for details.