What it is. Yet another implementation of Ultralytics's YOLOv5. yolort aims to make the training and inference of object detection tasks integrate more seamlessly. yolort now adopts the same model structure as the official YOLOv5. The significant difference is that we adopt a dynamic-shape mechanism, which lets us embed both the pre-processing (letterbox) and post-processing (NMS) into the model graph and thereby simplifies the deployment strategy. In this sense, yolort makes it possible to deploy object detection models more easily and flexibly on LibTorch, ONNX Runtime, TVM, TensorRT and so on.
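To illustrate the letterbox step mentioned above, here is a minimal, dependency-light sketch of an aspect-ratio-preserving resize plus padding. The helper name and details are hypothetical and only illustrate the idea; yolort's actual implementation lives inside the model graph.

```python
import numpy as np

def letterbox(img, new_size=640, fill=114):
    """Resize an HxWxC uint8 image to fit new_size x new_size, keeping the
    aspect ratio and padding the remainder with a constant gray value.
    (Hypothetical helper for illustration, not yolort's internal code.)"""
    h, w = img.shape[:2]
    scale = min(new_size / h, new_size / w)
    nh, nw = round(h * scale), round(w * scale)
    # Nearest-neighbor resize via index mapping (avoids extra dependencies)
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    # Paste the resized image onto a constant-filled square canvas, centered
    canvas = np.full((new_size, new_size, img.shape[2]), fill, dtype=img.dtype)
    top = (new_size - nh) // 2
    left = (new_size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas, scale, (top, left)
```

Because this step is part of the exported graph, the same resizing and padding happen identically at training and deployment time, which is what makes the single-file deployment story possible.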
About the code. We follow the design principle of detr: object detection should not be more difficult than classification, and should not require complex libraries for training and inference. yolort is very simple to implement and experiment with. Do you like the implementation of torchvision's faster-rcnn, retinanet or detr? Do you like yolov5? You'll love yolort!
- TensorRT C++ interface example. Thanks to Shiquan.
- Deployment on TensorRT, and inferencing with the TensorRT Python interface.
- ONNX Runtime C++ interface example. Thanks to Fidan.
- TVM compile and inference notebooks.
- Export to ONNX, and inferencing with the ONNX Runtime Python interface.
- LibTorch C++ inference example.
- Export to TorchScript model.

There are no extra compiled components in yolort and package dependencies are minimal, so the code is very simple to use.
First of all, follow the official instructions to install PyTorch 1.8.0+ and torchvision 0.9.0+.
Installation via pip

Simple installation from PyPI:

```shell
pip install -U yolort
```
Or from source:

```shell
# clone yolort repository locally
git clone https://github.com/zhiqwang/yolort.git
cd yolort
# install in editable mode
pip install -e .
```
Install pycocotools (for evaluation on COCO):

```shell
pip install -U 'git+https://github.com/ppwwyyxx/cocoapi.git#subdirectory=PythonAPI'
```
To read a source of image(s) and detect its objects 🔥

```python
from yolort.models import yolov5s

# Load model
model = yolov5s(pretrained=True, score_thresh=0.45)
model.eval()

# Perform inference on an image file
predictions = model.predict("bus.jpg")
# Perform inference on a list of image files
predictions = model.predict(["bus.jpg", "zidane.jpg"])
```
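The returned predictions presumably follow torchvision's detection convention: one dict per input image with `boxes`, `labels`, and `scores` entries. As a sketch of downstream handling, the snippet below uses mocked plain-Python values standing in for tensors, and the `keep_confident` helper is hypothetical:

```python
# Mock prediction for a single image; real values would be torch.Tensors
# in torchvision's detection format (one dict per image).
predictions = [{
    "boxes": [[48.0, 398.0, 245.0, 903.0], [676.0, 412.0, 810.0, 878.0]],
    "scores": [0.89, 0.51],
    "labels": [0, 0],
}]

def keep_confident(pred, thresh=0.6):
    """Keep only detections whose score exceeds thresh (hypothetical helper)."""
    keep = [i for i, s in enumerate(pred["scores"]) if s > thresh]
    return {key: [vals[i] for i in keep] for key, vals in pred.items()}

filtered = keep_confident(predictions[0])  # drops the 0.51-score detection
```

Note that the model already applies its own `score_thresh` inside the graph, so extra filtering like this is only needed if you want a stricter cutoff at display time.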
Loading via torch.hub

The models are also available via torch.hub; to load yolov5s with pretrained weights, simply do:

```python
model = torch.hub.load("zhiqwang/yolort:main", "yolov5s", pretrained=True)
```
The following is the interface for loading checkpoint weights trained with ultralytics/yolov5. Please see our documents on what we share and how we differ from yolov5 for more details.

```python
from yolort.models import YOLOv5

# Download checkpoint from https://github.com/ultralytics/yolov5/releases/download/v6.0/yolov5s.pt
ckpt_path_from_ultralytics = "yolov5s.pt"
model = YOLOv5.load_from_yolov5(ckpt_path_from_ultralytics, score_thresh=0.25)
model.eval()

img_path = "test/assets/bus.jpg"
predictions = model.predict(img_path)
```
We provide a tutorial to demonstrate how the model is converted into TorchScript, and a C++ example of how to do inference with the serialized TorchScript model.
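The export presumably goes through `torch.jit.script`. The general pattern is shown below on a tiny stand-in module rather than the full detector (which would require downloading pretrained weights); the class name and file path are illustrative only:

```python
import torch

class TinyDetector(torch.nn.Module):
    """Stand-in for the real model, just to illustrate scripting."""
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)

model = TinyDetector().eval()
scripted = torch.jit.script(model)             # compile the module to TorchScript
scripted.save("tiny_detector.torchscript.pt")  # serialized file, loadable from C++ via torch::jit::load
reloaded = torch.jit.load("tiny_detector.torchscript.pt")
out = reloaded(torch.rand(1, 3, 32, 32))
```

Because yolort embeds pre- and post-processing in the graph, the scripted detector is self-contained: the C++ side only needs to load the file and feed it an image tensor.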
We provide a pipeline for deploying yolort with ONNX Runtime.

```python
from yolort.runtime import PredictorORT

# Load the serialized ONNX model
engine_path = "yolov5n6.onnx"
y_runtime = PredictorORT(engine_path, device="cpu")

# Perform inference on an image file
predictions = y_runtime.predict("bus.jpg")
```
Please check out this tutorial on yolort's ONNX model conversion and ONNX Runtime inferencing. You can also use the example for the ONNX Runtime C++ interface.
The pipeline for TensorRT deployment is also very easy to use.
```python
import torch
from yolort.runtime import PredictorTRT

# Load the serialized TensorRT engine
engine_path = "yolov5n6.engine"
device = torch.device("cuda")
y_runtime = PredictorTRT(engine_path, device=device)

# Perform inference on an image file
predictions = y_runtime.predict("bus.jpg")
```
In addition, we provide a tutorial detailing yolort's model conversion to TensorRT and the use of the Python interface. Please check this example if you want to use the C++ interface.
yolort can now draw the model graph directly; check out our tutorial to see how to use and visualize the model graph.
We love your input! Please see our Contributing Guide to get started and for how to help out. Thank you to all our contributors! If you like this project please consider ⭐ this repo, as it is the simplest way to support us.
If you use yolort in your publication, please cite it by using the following BibTeX entry.
```bibtex
@Misc{yolort2021,
  author       = {Zhiqiang Wang and Song Lin and Shiquan Yu and Wei Zeng and Fidan Kharrasov},
  title        = {YOLORT: A runtime stack for object detection on specialized accelerators},
  howpublished = {\url{https://github.com/zhiqwang/yolort}},
  year         = {2021}
}
```
The yolov5 code is borrowed from ultralytics.