roboflow / supervision

We write your reusable computer vision tools. πŸ’œ
https://supervision.roboflow.com
MIT License


[notebooks](https://github.com/roboflow/notebooks) | [inference](https://github.com/roboflow/inference) | [autodistill](https://github.com/autodistill/autodistill) | [maestro](https://github.com/roboflow/multimodal-maestro)
[![version](https://badge.fury.io/py/supervision.svg)](https://badge.fury.io/py/supervision) [![downloads](https://img.shields.io/pypi/dm/supervision)](https://pypistats.org/packages/supervision) [![snyk](https://snyk.io/advisor/python/supervision/badge.svg)](https://snyk.io/advisor/python/supervision) [![license](https://img.shields.io/pypi/l/supervision)](https://github.com/roboflow/supervision/blob/main/LICENSE.md) [![python-version](https://img.shields.io/pypi/pyversions/supervision)](https://badge.fury.io/py/supervision) [![colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow/supervision/blob/main/demo.ipynb) [![gradio](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Roboflow/Annotators) [![discord](https://img.shields.io/discord/1159501506232451173?logo=discord&label=discord&labelColor=fff&color=5865f2&link=https%3A%2F%2Fdiscord.gg%2FGbfgXGJ8Bk)](https://discord.gg/GbfgXGJ8Bk) [![built-with-material-for-mkdocs](https://img.shields.io/badge/Material_for_MkDocs-526CFE?logo=MaterialForMkDocs&logoColor=white)](https://squidfunk.github.io/mkdocs-material/)

πŸ‘‹ hello

We write your reusable computer vision tools. Whether you need to load your dataset from your hard drive, draw detections on an image or video, or count how many detections are in a zone, you can count on us! 🀝

πŸ’» install

Pip install the supervision package in a Python>=3.8 environment.

pip install supervision

Read more about conda, mamba, and installing from source in our guide.

πŸ”₯ quickstart

models

Supervision was designed to be model agnostic. Just plug in any classification, detection, or segmentation model. For your convenience, we have created connectors for the most popular libraries like Ultralytics, Transformers, or MMDetection.

import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(...)
model = YOLO("yolov8s.pt")
result = model(image)[0]
detections = sv.Detections.from_ultralytics(result)

len(detections)
# 5
πŸ‘‰ more model connectors

inference

Running with [Inference](https://github.com/roboflow/inference) requires a [Roboflow API KEY](https://docs.roboflow.com/api-reference/authentication#retrieve-an-api-key).

```python
import cv2
import supervision as sv
from inference import get_model

image = cv2.imread(...)
model = get_model(model_id="yolov8s-640", api_key=<ROBOFLOW_API_KEY>)
result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)

len(detections)
# 5
```
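The Transformers connector follows the same pattern: run the model, post-process its output, and hand the result to supervision. A minimal sketch, assuming a DETR object-detection checkpoint from Hugging Face (the model ID, confidence threshold, and post-processing call are illustrative choices, not prescribed by supervision):

```python
import torch
import supervision as sv
from PIL import Image
from transformers import DetrForObjectDetection, DetrImageProcessor

image = Image.open(...)
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# convert raw logits/boxes into per-image results in pixel (xyxy) coordinates
width, height = image.size
results = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=torch.tensor([[height, width]]))[0]

detections = sv.Detections.from_transformers(results)
```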

annotators

Supervision offers a wide range of highly customizable annotators, allowing you to compose the perfect visualization for your use case.

import cv2
import supervision as sv

image = cv2.imread(...)
detections = sv.Detections(...)

box_annotator = sv.BoxAnnotator()
annotated_frame = box_annotator.annotate(
  scene=image.copy(),
  detections=detections)

https://github.com/roboflow/supervision/assets/26109316/691e219c-0565-4403-9218-ab5644f39bce
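Annotators compose: each `annotate` call takes a scene and detections and returns the annotated scene, so several can be layered onto one frame. A minimal sketch stacking a box annotator and a label annotator (which annotators you combine is up to you):

```python
import cv2
import supervision as sv

image = cv2.imread(...)
detections = sv.Detections(...)

box_annotator = sv.BoxAnnotator()
label_annotator = sv.LabelAnnotator()

# each annotator draws on the scene it receives, so the calls chain cleanly
annotated_frame = box_annotator.annotate(scene=image.copy(), detections=detections)
annotated_frame = label_annotator.annotate(scene=annotated_frame, detections=detections)
```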

datasets

Supervision provides a set of utils that allow you to load, split, merge, and save datasets in one of the supported formats.

import supervision as sv
from roboflow import Roboflow

project = Roboflow().workspace(<WORKSPACE_ID>).project(<PROJECT_ID>)
dataset = project.version(<PROJECT_VERSION>).download("coco")

ds = sv.DetectionDataset.from_coco(
    images_directory_path=f"{dataset.location}/train",
    annotations_path=f"{dataset.location}/train/_annotations.coco.json",
)

path, image, annotation = ds[0]  # loads image on demand

for path, image, annotation in ds:
    ...  # loads image on demand
πŸ‘‰ more dataset utils

load

```python
dataset = sv.DetectionDataset.from_yolo(
    images_directory_path=...,
    annotations_directory_path=...,
    data_yaml_path=...
)

dataset = sv.DetectionDataset.from_pascal_voc(
    images_directory_path=...,
    annotations_directory_path=...
)

dataset = sv.DetectionDataset.from_coco(
    images_directory_path=...,
    annotations_path=...
)
```

split

```python
train_dataset, test_dataset = dataset.split(split_ratio=0.7)
test_dataset, valid_dataset = test_dataset.split(split_ratio=0.5)

len(train_dataset), len(test_dataset), len(valid_dataset)
# (700, 150, 150)
```

merge

```python
ds_1 = sv.DetectionDataset(...)
len(ds_1)
# 100
ds_1.classes
# ['dog', 'person']

ds_2 = sv.DetectionDataset(...)
len(ds_2)
# 200
ds_2.classes
# ['cat']

ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
len(ds_merged)
# 300
ds_merged.classes
# ['cat', 'dog', 'person']
```

save

```python
dataset.as_yolo(
    images_directory_path=...,
    annotations_directory_path=...,
    data_yaml_path=...
)

dataset.as_pascal_voc(
    images_directory_path=...,
    annotations_directory_path=...
)

dataset.as_coco(
    images_directory_path=...,
    annotations_path=...
)
```

convert

```python
sv.DetectionDataset.from_yolo(
    images_directory_path=...,
    annotations_directory_path=...,
    data_yaml_path=...
).as_pascal_voc(
    images_directory_path=...,
    annotations_directory_path=...
)
```
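A handy pattern when loading a dataset is to spot-check it by drawing the ground-truth annotations, which are plain `sv.Detections`, back onto the images with an annotator. A minimal sketch, assuming a COCO-format dataset on disk and a hypothetical output path:

```python
import cv2
import supervision as sv

ds = sv.DetectionDataset.from_coco(
    images_directory_path=...,
    annotations_path=...,
)

box_annotator = sv.BoxAnnotator()

# draw ground-truth boxes onto each image to spot-check the annotations
for path, image, annotations in ds:
    preview = box_annotator.annotate(scene=image.copy(), detections=annotations)
    cv2.imwrite(f"{path}.preview.jpg", preview)  # hypothetical output location
```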

🎬 tutorials

Want to learn how to use Supervision? Explore our how-to guides, end-to-end examples, cheatsheet, and cookbooks!


Dwell Time Analysis with Computer Vision | Real-Time Stream Processing

Created: 5 Apr 2024

Learn how to use computer vision to analyze wait times and optimize processes. This tutorial covers object detection, tracking, and calculating time spent in designated zones. Use these techniques to improve customer experience in retail, optimize traffic management, and more.
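The core recipe behind that tutorial can be sketched with supervision's tracker and zone utilities. A rough, non-authoritative sketch (the video path, polygon coordinates, and model choice are placeholders, and PolygonZone's constructor arguments vary slightly across supervision versions):

```python
import numpy as np
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8s.pt")
tracker = sv.ByteTrack()

# placeholder zone polygon in pixel coordinates
zone = sv.PolygonZone(polygon=np.array([[100, 100], [600, 100], [600, 500], [100, 500]]))

video_path = "waiting-area.mp4"  # placeholder
video_info = sv.VideoInfo.from_video_path(video_path)
dwell_time = {}  # tracker_id -> seconds spent inside the zone

for frame in sv.get_video_frames_generator(video_path):
    detections = sv.Detections.from_ultralytics(model(frame)[0])
    detections = tracker.update_with_detections(detections)
    inside = zone.trigger(detections)
    for tracker_id in detections.tracker_id[inside]:
        dwell_time[tracker_id] = dwell_time.get(tracker_id, 0.0) + 1 / video_info.fps
```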


Speed Estimation & Vehicle Tracking | Computer Vision | Open Source

Created: 11 Jan 2024

Learn how to track and estimate the speed of vehicles using YOLO, ByteTrack, and Roboflow Inference. This comprehensive tutorial covers object detection, multi-object tracking, filtering detections, perspective transformation, speed estimation, visualization improvements, and more.
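The key step in that tutorial, turning pixel displacement into speed, relies on a perspective transform from the image plane onto a flat road plane with known dimensions. A minimal OpenCV sketch of that idea (the corner correspondences, road dimensions, and frame rate below are placeholder values, not the tutorial's):

```python
import cv2
import numpy as np

# placeholder correspondences: four image-plane corners (pixels) mapped to a
# top-down road plane measured in meters (here 10 m wide, 60 m long)
SOURCE = np.array([[400, 600], [1500, 600], [1900, 1080], [20, 1080]], dtype=np.float32)
TARGET = np.array([[0, 0], [10, 0], [10, 60], [0, 60]], dtype=np.float32)
M = cv2.getPerspectiveTransform(SOURCE, TARGET)

def to_road_plane(points: np.ndarray) -> np.ndarray:
    # (N, 2) pixel coordinates -> (N, 2) road-plane coordinates in meters
    reshaped = points.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(reshaped, M).reshape(-1, 2)

# speed = road-plane distance between two tracked positions / elapsed time
fps = 25  # placeholder frame rate
p0 = to_road_plane(np.array([[900.0, 700.0]]))   # position in frame t
p1 = to_road_plane(np.array([[910.0, 760.0]]))   # position in frame t + 1
speed_mps = float(np.linalg.norm(p1 - p0)) * fps  # positions one frame apart
speed_kmh = speed_mps * 3.6
```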

πŸ’œ built with supervision

Did you build something cool using supervision? Let us know!

https://user-images.githubusercontent.com/26109316/207858600-ee862b22-0353-440b-ad85-caa0c4777904.mp4

https://github.com/roboflow/supervision/assets/26109316/c9436828-9fbf-4c25-ae8c-60e9c81b3900

https://github.com/roboflow/supervision/assets/26109316/3ac6982f-4943-4108-9b7f-51787ef1a69f

πŸ“š documentation

Visit our documentation page to learn how supervision can help you build computer vision applications faster and more reliably.

πŸ† contribution

We love your input! Please see our contributing guide to get started. Thank you πŸ™ to all our contributors!