Train and run inference on your custom YOLO-NAS model with a single command line
A Next-Generation Object Detection Foundational Model Generated by Deci's Neural Architecture Search Technology
Deci is thrilled to announce the release of a new object detection model, YOLO-NAS - a game-changer in the world of object detection, providing superior real-time object detection capabilities and production-ready performance. Deci's mission is to provide AI teams with tools to remove development barriers and attain efficient inference performance more quickly.
In terms of pure numbers, YOLO-NAS is ~0.5 mAP point more accurate and 10-20% faster than equivalent variants of YOLOv8 and YOLOv7.
| Model | mAP | Latency (ms) |
|---|---|---|
| YOLO-NAS S | 47.5 | 3.21 |
| YOLO-NAS M | 51.55 | 5.85 |
| YOLO-NAS L | 52.22 | 7.87 |
| YOLO-NAS S INT-8 | 47.03 | 2.36 |
| YOLO-NAS M INT-8 | 51.0 | 3.78 |
| YOLO-NAS L INT-8 | 52.1 | 4.78 |
mAP numbers in the table are reported for the COCO 2017 val dataset, and latency was benchmarked for 640x640 images on an NVIDIA T4 GPU.
YOLO-NAS's architecture employs quantization-aware blocks and selective quantization for optimized performance. When converted to its INT8 quantized version, YOLO-NAS experiences a smaller precision drop (0.51, 0.65, and 0.45 mAP points for the S, M, and L variants, respectively) than other models, which typically lose 1-2 mAP points during quantization. These techniques culminate in an innovative architecture with superior object detection capabilities and top-notch performance.
Clone this repository:
```bash
git clone https://github.com/naseemap47/YOLO-NAS.git
cd YOLO-NAS
```
Create an Anaconda Python environment:
```bash
conda create -n yolo-nas python=3.9 -y
conda activate yolo-nas
```
PyTorch v1.11.0 installation:
```bash
# conda installation
conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch -y

# OR

# pip installation
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
```
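A quick way to confirm that the CUDA build was picked up (assumes an NVIDIA GPU with a CUDA 11.3-compatible driver):

```python
import torch

# Verify the installed version and that the GPU is visible to PyTorch
print(torch.__version__)           # expect 1.11.0 (+cu113 for the pip build)
print(torch.cuda.is_available())   # True when a compatible GPU and driver are present
```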
Quantization Aware Training:
```bash
# For Quantization Aware Training
pip install pytorch-quantization==2.1.2 --extra-index-url https://pypi.ngc.nvidia.com
```
Install Super-Gradients:
```bash
pip install super-gradients==3.1.3
```
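Once installed, you can sanity-check Super-Gradients by loading a COCO pre-trained YOLO-NAS model directly in Python (a minimal sketch; the image path is a placeholder):

```python
from super_gradients.training import models

# Downloads the COCO pre-trained YOLO-NAS S checkpoint on first use
model = models.get("yolo_nas_s", pretrained_weights="coco")

# Run detection on one image and display the result
model.predict("path/to/image.jpg", conf=0.5).show()
```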
Your custom dataset should be in the COCO JSON data format. To convert YOLO (.txt) or PASCAL VOC (.xml) annotations to COCO JSON, use the JSON converter: https://github.com/naseemap47/autoAnnoter#10-yolo_to_jsonpy
COCO Data Format:
```
├── Dataset
│   ├── annotations
│   │   ├── train.json
│   │   ├── valid.json
│   │   └── test.json
│   ├── train
│   │   ├── 1.jpg
│   │   ├── abc.png
│   │   └── ....
│   ├── val
│   │   ├── 2.jpg
│   │   ├── fram.png
│   │   └── ....
│   └── test
│       ├── img23.jpeg
│       ├── 50.jpg
│       └── ....
```
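Each annotation file (train.json, valid.json, test.json) follows the standard COCO detection layout. Below is a minimal skeleton written from Python for illustration; all field values are made up:

```python
import json

# Minimal COCO detection annotation skeleton (illustrative values only)
coco = {
    "images": [
        {"id": 1, "file_name": "1.jpg", "width": 640, "height": 480},
    ],
    "annotations": [
        # bbox is [x, y, width, height] in absolute pixels
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [100, 120, 50, 80], "area": 4000, "iscrowd": 0},
    ],
    "categories": [
        {"id": 1, "name": "person"},
    ],
}

with open("train.json", "w") as f:
    json.dump(coco, f, indent=2)
```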
To train a custom model on your own data, you need to create a data.yaml file. Example:
```yaml
Dir: 'Data'
images:
  test: test
  train: train
  val: valid
labels:
  test: annotations/test.json
  train: annotations/train.json
  val: annotations/valid.json
```
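The image and label entries are presumably resolved relative to the dataset root Dir; a short sketch of that interpretation (an assumption about how train.py reads the file, not a guarantee):

```python
import os
import yaml  # PyYAML

with open("data.yaml") as f:
    cfg = yaml.safe_load(f)

# Assumption: paths under images/labels are relative to the dataset root `Dir`
root = cfg["Dir"]
train_images = os.path.join(root, cfg["images"]["train"])   # Data/train
train_labels = os.path.join(root, cfg["labels"]["train"])   # Data/annotations/train.json
print(train_images, train_labels)
```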
You can train your YOLO-NAS model with a single command line.
Example:
```bash
python3 train.py --data /dir/dataset/data.yaml --batch 6 --epoch 100 --model yolo_nas_m --size 640

# From a pre-trained weight
python3 train.py --data /dir/dataset/data.yaml --batch 6 --epoch 100 --model yolo_nas_m --size 640 \
                 --weight runs/train2/ckpt_latest.pth
```
Example (resume an interrupted training run):
```bash
python3 train.py --data /dir/dataset/data.yaml --batch 6 --epoch 100 --model yolo_nas_m --size 640 \
                 --weight runs/train2/ckpt_latest.pth --resume
```
Example (Quantization Aware Training on a trained checkpoint):
```bash
python3 qat.py --data /dir/dataset/data.yaml --weight runs/train2/ckpt_best.pth --batch 6 --epoch 100 --model yolo_nas_m --size 640
```
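For context, the pytorch-quantization package installed earlier works by swapping standard layers for fake-quantized counterparts before the model is built. A generic illustrative sketch (not the repo's qat.py, and shown on a stand-in torchvision network rather than YOLO-NAS):

```python
from pytorch_quantization import quant_modules

# Patch torch.nn layers BEFORE building the model so that convolutions and
# linears are constructed with fake-quantization nodes for QAT
quant_modules.initialize()

import torchvision
model = torchvision.models.resnet18()   # stand-in network for illustration
print(type(model.conv1))                # QuantConv2d instead of nn.Conv2d
```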
You can run inference with your YOLO-NAS model using a single command line.
Example:
```bash
# For the COCO pre-trained YOLO-NAS model
python3 inference.py --model yolo_nas_s --weight coco --source 0                      # camera
python3 inference.py --model yolo_nas_m --weight coco --source /test/video.mp4 --conf 0.66   # video

# For a custom-trained model
python3 inference.py --num 3 --model yolo_nas_m --weight /runs/train4/ckpt_best.pth --source /test/video.mp4 --conf 0.66   # video

# Other --source options
                     --source /test/sample.jpg --conf 0.5 --save          # save image output
                     --source /test/video.mp4 --conf 0.75 --hide          # save video and hide the display window
                     --source 0 --conf 0.45                               # camera
                     --source 'rtsp://link' --conf 0.25 --save            # save RTSP video stream
```
Batch inference on multiple sources:
```bash
python3 batch.py --num 3 --model yolo_nas_m --weight /runs/train4/ckpt_best.pth --source '/test/video.mp4' '/test/video23.mp4'   # videos
                 --source 0 2 --conf 0.45 --full                          # web cameras in full screen
                 --source 'rtsp://link' 'rtsp://link3' --conf 0.25        # RTSP video streams
```
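The same inference can also be driven from Python through Super-Gradients (a sketch; the checkpoint path and class count must match your own training run, as in the --num 3 examples above):

```python
from super_gradients.training import models

# num_classes must match the detection head the checkpoint was trained with
model = models.get("yolo_nas_m", num_classes=3,
                   checkpoint_path="runs/train4/ckpt_best.pth")

# Run on a video and write the annotated result to disk
model.predict("test/video.mp4", conf=0.66).save("output.mp4")
```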