YOLOv8-3D (tracker)

A low-code, simple 2D and 3D bounding box object detection and tracking library for Python 3.10.

Description:

YOLOv8-3D is a lightweight and user-friendly library designed for efficient 2D and 3D bounding box object detection in Advanced Driver Assistance Systems (ADAS). With its intuitive API and comprehensive features, YOLOv8-3D makes it straightforward to integrate object detection capabilities into your ADAS projects.


This API makes it easy to understand and integrate 3D perception, so that systems can make more informed decisions and operate effectively in complex, real-world environments.

Key Features:

  * Augmentations for better training
  * Automated training backups
  * Results plotting

Get started

Prerequisites

Python 3.10 and conda (environment setup shown below).

Installation

  1. Clone this repository.
  2. Install the required dependencies.

Run

For training:

conda create -n test1 python=3.10 -y
conda activate test1
pip install tensorflow

For more detailed TensorFlow GPU installation instructions and options, refer to the official TensorFlow installation documentation.
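
As an optional sanity check (not part of this repo), you can confirm that the TensorFlow install can see a GPU before training:

# Optional sanity check: list the GPUs TensorFlow can see.
# A CPU-only install will simply print an empty list.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
print("GPUs visible:", tf.config.list_physical_devices("GPU"))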

####### Select the model in train.py ########
# select_model = 'resnet50'
# select_model = 'resnet101'
# select_model = 'resnet152'
# select_model = 'vgg11'
# select_model = 'vgg16'
# select_model = 'vgg19'
# select_model = 'efficientnetb0'
# select_model = 'efficientnetb5'
select_model = 'mobilenetv2'
### [INFO] Set the number of iterations of train.py in run_train.sh; training info is saved automatically every 20 epochs.
bash run_train.sh
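
For context, select_model chooses the CNN backbone used by the 3D box regression network. Below is a minimal sketch of how such a switch could map names to Keras backbones via tf.keras.applications; the actual construction lives in train.py and may differ ('vgg11' is omitted because tf.keras.applications has no VGG11):

# Illustration only: map a select_model string to a Keras backbone.
import tensorflow as tf

BACKBONES = {
    'resnet50':       tf.keras.applications.ResNet50,
    'resnet101':      tf.keras.applications.ResNet101,
    'resnet152':      tf.keras.applications.ResNet152,
    'vgg16':          tf.keras.applications.VGG16,
    'vgg19':          tf.keras.applications.VGG19,
    'efficientnetb0': tf.keras.applications.EfficientNetB0,
    'efficientnetb5': tf.keras.applications.EfficientNetB5,
    'mobilenetv2':    tf.keras.applications.MobileNetV2,
}

def build_backbone(select_model, input_shape=(224, 224, 3)):
    # include_top=False drops the ImageNet classifier so regression heads
    # (dimensions, orientation) can be attached on top of the feature map.
    return BACKBONES[select_model](include_top=False,
                                   weights='imagenet',
                                   input_shape=input_shape)

backbone = build_backbone('mobilenetv2')
print(backbone.output_shape)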

A new environment is recommended for running inference on CPU only.

For testing:

conda create -n test2 python=3.10 -y
conda activate test2
pip install tensorflow ultralytics
python demo.py
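
For orientation, demo.py pairs an Ultralytics YOLOv8 detector (2D boxes) with the trained Keras model (3D boxes). Below is a minimal sketch of just the 2D detection stage, assuming a stock yolov8n.pt checkpoint and a hypothetical input image path; the 3D regression and drawing in demo.py are not reproduced here:

# Sketch of the 2D detection stage only ('kitti_sample.png' is a placeholder path).
import cv2
from ultralytics import YOLO

model = YOLO('yolov8n.pt')                 # 2D detector
frame = cv2.imread('kitti_sample.png')     # any test image

results = model(frame)[0]                  # run detection on one frame
for box in results.boxes:
    x1, y1, x2, y2 = map(int, box.xyxy[0])  # 2D box corners (pixels)
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    # each crop would then go to the Keras model for 3D dimension /
    # orientation regression before projecting a 3D box back onto the image

cv2.imwrite('out.png', frame)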


Realtime BEV plot

To enable the realtime bird's-eye-view (BEV) plot and tracking, set:

BEV_plot = True
TracK = True
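
For intuition, a BEV plot is essentially a top-down scatter of the estimated 3D box centres on the ground plane. A rough matplotlib illustration with made-up values (not the repo's plotting code):

# Rough illustration of a BEV scatter: object centres on the ground plane
# (camera x = lateral, z = forward). The coordinates below are made up.
import matplotlib.pyplot as plt

centers = [(-3.2, 12.5), (1.1, 8.0), (4.7, 21.3)]   # (x, z) in metres

xs, zs = zip(*centers)
plt.scatter(xs, zs, c='red', marker='s', label='detections')
plt.scatter([0], [0], c='blue', marker='^', label='ego vehicle')
plt.xlabel('x (m, lateral)')
plt.ylabel('z (m, forward)')
plt.title("Bird's-eye view")
plt.legend()
plt.gca().set_aspect('equal')
plt.savefig('bev.png')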

Contributing

Contributions are welcome! If you find any issues or have suggestions for improvements, please open an issue or submit a pull request.

Acknowledgements

* https://github.com/AlexeyAB/darknet
* https://github.com/WongKinYiu/yolor
* https://github.com/WongKinYiu/PyTorch_YOLOv4
* https://github.com/WongKinYiu/ScaledYOLOv4
* https://github.com/Megvii-BaseDetection/YOLOX
* https://github.com/ultralytics/yolov3
* https://github.com/ultralytics/yolov5
* https://github.com/DingXiaoH/RepVGG
* https://github.com/JUGGHM/OREPA_CVPR2022
* https://github.com/TexasInstruments/edgeai-yolov5/tree/yolo-pose
* https://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d
* https://opencv.org/
* https://github.com/ultralytics/ultralytics
* https://github.com/lzccccc/3d-bounding-box-estimation-for-autonomous-driving
* https://github.com/lzccccc/SMOKE
* https://github.com/abhi1kumar/DEVIANT.git