Deci-AI / super-gradients

Easily train or fine-tune SOTA computer vision models with one open source training library. The home of Yolo-NAS.
https://www.supergradients.com
Apache License 2.0


**Build, train, and fine-tune production-ready deep learning SOTA vision models**

[![Tweet](https://img.shields.io/twitter/url/http/shields.io.svg?style=social)](https://twitter.com/intent/tweet?text=Easily%20train%20or%20fine-tune%20SOTA%20computer%20vision%20models%20from%20one%20training%20repository&url=https://github.com/Deci-AI/super-gradients&via=deci_ai&hashtags=AI,deeplearning,computervision,training,opensource)

#### Version 3.5 is out! Notebooks have been updated!

______________________________________________________________________

Getting Started | Pretrained Models | Community | License


Build with SuperGradients


Support various computer vision tasks

Ready to deploy pre-trained SOTA models

YOLO-NAS and YOLO-NAS-POSE architectures are out! The new YOLO-NAS delivers state-of-the-art performance with an unparalleled accuracy-speed trade-off, outperforming other models such as YOLOv5, YOLOv6, YOLOv7 and YOLOv8. A YOLO-NAS-POSE model for pose estimation is also available, delivering a state-of-the-art accuracy/performance trade-off.

Check these out here: YOLO-NAS & YOLO-NAS-POSE.

```python
# Load model with pretrained weights
from super_gradients.training import models
from super_gradients.common.object_names import Models

model = models.get(Models.YOLO_NAS_M, pretrained_weights="coco")
```

All Computer Vision Models - Pretrained Checkpoints can be found in the Model Zoo

Classification

Semantic Segmentation

Object Detection

Pose Estimation

Easy to train SOTA Models

Easily load and fine-tune production-ready, pre-trained SOTA models that incorporate best practices and validated hyper-parameters for achieving best-in-class accuracy. For more information on how to do it, go to Getting Started.
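For illustration, here is a minimal fine-tuning sketch. The dataloaders, class count, and hyper-parameter values are placeholders to adapt to your own task, and the scheduler/loss choices below are assumptions; see the Getting Started guide for the full, validated workflow.

```python
from super_gradients import Trainer
from super_gradients.training import models
from super_gradients.training.metrics.classification_metrics import Accuracy
import torch

trainer = Trainer(experiment_name="resnet18_finetune", ckpt_root_dir="./checkpoints")

# Load an ImageNet-pretrained backbone and replace the head for 10 classes (placeholder count)
model = models.get("resnet18", num_classes=10, pretrained_weights="imagenet")

train_loader = ...  # your torch.utils.data.DataLoader for training (placeholder)
valid_loader = ...  # your torch.utils.data.DataLoader for validation (placeholder)

training_params = {
    "max_epochs": 20,
    "initial_lr": 1e-3,
    "lr_mode": "cosine",                  # assumed scheduler name; pick one supported by your SG version
    "loss": torch.nn.CrossEntropyLoss(),  # placeholder loss for a classification fine-tune
    "train_metrics_list": [Accuracy()],
    "valid_metrics_list": [Accuracy()],
    "metric_to_watch": "Accuracy",
}

trainer.train(model=model, training_params=training_params,
              train_loader=train_loader, valid_loader=valid_loader)
```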

Plug and play recipes

```bash
python -m super_gradients.train_from_recipe architecture=regnetY800 dataset_interface.data_dir=<YOUR_Imagenet_LOCAL_PATH> ckpt_root_dir=<CHECKPOINT_DIRECTORY>
```

More examples on how and why to use recipes can be found in Recipes

Production readiness

All SuperGradients models are production-ready in the sense that they are compatible with deployment tools such as TensorRT (NVIDIA) and OpenVINO (Intel) and can easily be taken into production. With a few lines of code you can integrate the models into your codebase.

```python
# Load model with pretrained weights
from super_gradients.training import models
from super_gradients.common.object_names import Models
import torch

model = models.get(Models.YOLO_NAS_M, pretrained_weights="coco")

# Prepare model for conversion
# Input size is in format of [Batch x Channels x Height x Width] where 640 is the standard COCO dataset dimension
model.eval()
model.prep_model_for_conversion(input_size=[1, 3, 640, 640])

# Create dummy_input
dummy_input = torch.randn(1, 3, 640, 640)

# Convert model to onnx
torch.onnx.export(model, dummy_input, "yolo_nas_m.onnx")
```
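Once exported, you can sanity-check the ONNX file with onnxruntime (a separate dependency, `pip install onnxruntime`). The snippet below is a minimal sketch assuming the 1x3x640x640 input shape used above:

```python
import numpy as np
import onnxruntime as ort  # assumption: onnxruntime installed separately

# Load the exported model on CPU and run a random 1x3x640x640 tensor through it
session = ort.InferenceSession("yolo_nas_m.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)

outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])  # inspect output tensor shapes
```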

More information on how to take your model to production can be found in Getting Started notebooks

Quick Installation


```bash
pip install super-gradients
```

What's New


Version 3.4.0 (November 6, 2023)


Version 3.1.3 (July 19, 2023)


30th of May

Version 3.1.1 (May 3rd)

Check out SG full release notes.

Table of Contents


Getting Started


Start Training with Just 1 Command Line

The simplest and most straightforward way to start training SOTA models is with SuperGradients' reproducible recipes. Just define your dataset path and where you want your checkpoints to be saved, and you are good to go from your terminal!

Just make sure that you set up your dataset according to the data dir specified in the recipe.

```bash
python -m super_gradients.train_from_recipe --config-name=imagenet_regnetY architecture=regnetY800 dataset_interface.data_dir=<YOUR_Imagenet_LOCAL_PATH> ckpt_root_dir=<CHECKPOINT_DIRECTORY>
```

Quickly Load Pre-Trained Weights for Your Desired Model with SOTA Performance

Want to try our pre-trained models on your machine? Import SuperGradients, initialize your Trainer, and load your desired architecture and pre-trained weights from our SOTA model zoo.

```python
# The pretrained_weights argument will load a pre-trained architecture on the provided dataset
from super_gradients.training import models

model = models.get("model-name", pretrained_weights="pretrained-model-name")
```

Classification

Semantic Segmentation

Pose Estimation

Object Detection

How to Predict Using a Pre-trained Model
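As a minimal sketch (assuming a recent SG version where detection models expose `predict()`; the image path is a placeholder):

```python
from super_gradients.training import models

model = models.get("yolo_nas_l", pretrained_weights="coco")

# "my_image.jpg" is a placeholder path; an image URL should also work
predictions = model.predict("my_image.jpg", conf=0.4)
predictions.show()  # visualize the predicted boxes
```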

Albumentations Integration

Advanced Features


Post Training Quantization and Quantization Aware Training

Quantization involves representing weights and biases in lower precision, resulting in reduced memory and computational requirements, making it useful for deploying models on devices with limited resources. The process can be done during training, called quantization-aware training (QAT), or after training, called post-training quantization (PTQ). A full tutorial can be found here.
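To illustrate the general idea only, the sketch below applies plain PyTorch dynamic post-training quantization to a toy model; it is not SG's quantization utilities, for which you should follow the linked tutorial.

```python
import torch
import torch.nn as nn

# Toy classifier used only to illustrate post-training (dynamic) quantization
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()

# Weights of the Linear layers are stored in int8; activations are quantized on the fly
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 3, 32, 32)
print(quantized(x).shape)  # same output shape, smaller weights
```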

Quantization Aware Training YoloNAS on Custom Dataset

This tutorial provides a comprehensive guide on how to fine-tune a YoloNAS model using a custom dataset. It also demonstrates how to utilize SG's QAT (Quantization-Aware Training) support. Additionally, it offers step-by-step instructions on deploying the model and performing benchmarking.

Knowledge Distillation Training

Knowledge Distillation is a training technique that uses a large model, the teacher model, to improve the performance of a smaller model, the student model. Learn more about SuperGradients' knowledge distillation training with our pre-trained BEiT base teacher model and ResNet18 student model on CIFAR10 in the example notebook on Google Colab, an easy-to-use tutorial running on free GPU hardware.
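The core idea can be sketched independently of SG's KD trainer: blend a soft-target KL term (teacher vs. student logits at a temperature) with the usual hard-label cross-entropy. The loss below is the classic formulation, not SuperGradients' own implementation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, temperature=4.0, alpha=0.5):
    """Classic soft-target distillation loss: KL divergence between softened
    teacher/student distributions, blended with cross-entropy on hard labels."""
    soft_teacher = F.log_softmax(teacher_logits / temperature, dim=1)
    soft_student = F.log_softmax(student_logits / temperature, dim=1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean", log_target=True) * temperature ** 2
    ce = F.cross_entropy(student_logits, targets)
    return alpha * kd + (1 - alpha) * ce

# Toy usage with random tensors
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
print(distillation_loss(student_logits, teacher_logits, targets))
```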

Recipes

To train a model, it is necessary to configure 4 main components. These components are aggregated into a single "main" recipe `.yaml` file that inherits the aforementioned dataset, architecture, training and checkpoint params. It is also possible (and recommended for flexibility) to override default settings with custom ones. All recipes can be found here.
Recipes support out of the box every model, metric or loss that is implemented in SuperGradients, but you can easily extend this to any custom object that you need by "registering" it, as sketched below. Check out this tutorial for more information.
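As a rough sketch of registration (the exact import path of the `register_model` decorator is an assumption and may differ across SG versions; check the registration tutorial for the canonical usage):

```python
import torch.nn as nn
from super_gradients.common.registry import register_model  # assumed import path; may vary by SG version

@register_model("my_tiny_net")  # the name you can then reference from a recipe's `architecture` field
class MyTinyNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))
```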

Using Distributed Data Parallel (DDP)

#### Why use DDP?

Recent deep learning models are growing so large that training on a single GPU can take weeks. In order to train models in a timely fashion, it is necessary to train them with multiple GPUs. Using hundreds of GPUs can reduce the training time of a model from a week to less than an hour.

#### How does it work?

Each GPU has its own process, which controls a copy of the model, loads its own mini-batch from disk and sends it to its GPU during training. After the forward pass is completed on every GPU, the gradient is reduced across all GPUs, so that every GPU ends up with the same gradient locally. As a result, the model weights stay synchronized across all GPUs after the backward pass.

#### How to use it?

You can use SuperGradients to train your model with DDP in just a few lines.

*main.py*

```python
from super_gradients import init_trainer, Trainer
from super_gradients.common import MultiGPUMode
from super_gradients.training.utils.distributed_training_utils import setup_device

# Initialize the environment
init_trainer()

# Launch DDP on 4 GPUs
setup_device(multi_gpu=MultiGPUMode.DISTRIBUTED_DATA_PARALLEL, num_gpus=4)

# Call the trainer
Trainer(experiment_name=...)

# Everything you do below will run on 4 GPUs
...
Trainer.train(...)
```

Finally, you can launch your distributed training with a simple python call.

```bash
python main.py
```

Please note that if you work with `torch<1.9.0` (deprecated), you will have to launch your training with either `torch.distributed.launch` or `torchrun`, in which case `nproc_per_node` will overwrite the value set with `gpu_mode`:

```bash
python -m torch.distributed.launch --nproc_per_node=4 main.py
```

```bash
torchrun --nproc_per_node=4 main.py
```

#### Calling functions on a single node

It is often the case in DDP training that we want to execute code only on the master rank (i.e. rank 0). In SG, users usually execute their own code by triggering "Phase Callbacks" (see "Using phase callbacks" section below). You can make sure the desired code only runs on rank 0 by using `ddp_silent_mode` or the `multi_process_safe` decorator. For example, consider the simple phase callback below, which uploads the first 3 images of every batch during training to TensorBoard:

```python
from super_gradients.training.utils.callbacks import PhaseCallback, PhaseContext, Phase
from super_gradients.common.environment.env_helpers import multi_process_safe


class Upload3TrainImagesCallback(PhaseCallback):
    def __init__(self):
        super().__init__(phase=Phase.TRAIN_BATCH_END)

    @multi_process_safe
    def __call__(self, context: PhaseContext):
        batch_imgs = context.inputs.cpu().detach().numpy()
        tag = "batch_" + str(context.batch_idx) + "_images"
        context.sg_logger.add_images(tag=tag, images=batch_imgs[:3], global_step=context.epoch)
```

The `@multi_process_safe` decorator ensures that the callback will only be triggered by rank 0.
Alternatively, this can also be done via the SG trainer's boolean attribute `ddp_silent_mode` (which the phase context has access to), which is set to False iff the current process rank is zero (even after the process group has been killed):

```python
from super_gradients.training.utils.callbacks import PhaseCallback, PhaseContext, Phase


class Upload3TrainImagesCallback(PhaseCallback):
    def __init__(self):
        super().__init__(phase=Phase.TRAIN_BATCH_END)

    def __call__(self, context: PhaseContext):
        if not context.ddp_silent_mode:
            batch_imgs = context.inputs.cpu().detach().numpy()
            tag = "batch_" + str(context.batch_idx) + "_images"
            context.sg_logger.add_images(tag=tag, images=batch_imgs[:3], global_step=context.epoch)
```

Note that `ddp_silent_mode` can be accessed through `SgTrainer.ddp_silent_mode`. Hence, it can be used in scripts after calling `SgTrainer.train()` when some part of it should be run on rank 0 only.

#### Good to know

Your total batch size will be (number of GPUs x batch size), so you might want to increase your learning rate. There is no clear rule, but a rule of thumb seems to be to [linearly increase the learning rate with the number of GPUs](https://arxiv.org/pdf/1706.02677.pdf).

Easily change architecture parameters

```python
from super_gradients.training import models

# instantiate default pretrained resnet18
default_resnet18 = models.get(model_name="resnet18", num_classes=100, pretrained_weights="imagenet")

# instantiate pretrained resnet18, turning DropPath on with probability 0.5
droppath_resnet18 = models.get(model_name="resnet18", arch_params={"droppath_prob": 0.5}, num_classes=100, pretrained_weights="imagenet")

# instantiate pretrained resnet18, without classifier head. Output will be from the last stage before global pooling
backbone_resnet18 = models.get(model_name="resnet18", arch_params={"backbone_mode": True}, pretrained_weights="imagenet")
```

Using phase callbacks

```python
from super_gradients import Trainer
from torch.optim.lr_scheduler import ReduceLROnPlateau
from super_gradients.training.utils.callbacks import Phase, LRSchedulerCallback
from super_gradients.training.metrics.classification_metrics import Accuracy

# define PyTorch train and validation loaders and optimizer

# define what to be called in the callback
rop_lr_scheduler = ReduceLROnPlateau(optimizer, mode="max", patience=10, verbose=True)

# define phase callbacks, they will fire as defined in Phase
phase_callbacks = [LRSchedulerCallback(scheduler=rop_lr_scheduler, phase=Phase.VALIDATION_EPOCH_END, metric_name="Accuracy")]

# create a trainer object, look the declaration for more parameters
trainer = Trainer("experiment_name")

# define phase_callbacks as part of the training parameters
train_params = {"phase_callbacks": phase_callbacks}
```

Integration to DagsHub

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/11fW56pMpwOMHQSbQW6xxMRYvw1mEC-t-?usp=sharing)

```python
from super_gradients import Trainer

trainer = Trainer("experiment_name")
model = ...

training_params = {
    # ... your training params
    "sg_logger": "dagshub_sg_logger",  # DagsHub Logger, see class super_gradients.common.sg_loggers.dagshub_sg_logger.DagsHubSGLogger for details
    "sg_logger_params": {  # params that will be passed to __init__ of the logger super_gradients.common.sg_loggers.dagshub_sg_logger.DagsHubSGLogger
        "dagshub_repository": "/",  # Optional: your DagsHub project name, consisting of the owner name, followed by '/', and the repo name. If this is left empty, you'll be prompted in your run to fill it in manually.
        "log_mlflow_only": False,  # Optional: change to True to bypass logging to DVC, and log all artifacts only to MLflow
        "save_checkpoints_remote": True,
        "save_tensorboard_remote": True,
        "save_logs_remote": True,
    },
}
```

Integration to Weights and Biases

```python
from super_gradients import Trainer

# create a trainer object, look the declaration for more parameters
trainer = Trainer("experiment_name")

train_params = {
    # ... training parameters
    "sg_logger": "wandb_sg_logger",  # Weights&Biases Logger, see class WandBSGLogger for details
    "sg_logger_params": {  # parameters that will be passed to __init__ of the logger
        "project_name": "project_name",  # W&B project name
        "save_checkpoints_remote": True,
        "save_tensorboard_remote": True,
        "save_logs_remote": True,
    },
}
```

Integration to ClearML

```python
from super_gradients import Trainer

# create a trainer object, look the declaration for more parameters
trainer = Trainer("experiment_name")

train_params = {
    # ... training parameters
    "sg_logger": "clearml_sg_logger",  # ClearML Logger, see class ClearMLSGLogger for details
    "sg_logger_params": {  # parameters that will be passed to __init__ of the logger
        "project_name": "project_name",  # ClearML project name
        "save_checkpoints_remote": True,
        "save_tensorboard_remote": True,
        "save_logs_remote": True,
    },
}
```

Integration to Voxel51

You can apply SuperGradients YOLO-NAS models directly to your FiftyOne dataset using the apply_model() method:

```python
import fiftyone as fo
import fiftyone.zoo as foz

from super_gradients.training import models

dataset = foz.load_zoo_dataset("quickstart", max_samples=25)
dataset.select_fields().keep_fields()

model = models.get("yolo_nas_m", pretrained_weights="coco")
dataset.apply_model(model, label_field="yolo_nas", confidence_thresh=0.7)

session = fo.launch_app(dataset)
```

The SuperGradients YOLO-NAS model can be accessed directly from the FiftyOne Model Zoo:

```python
import fiftyone as fo
import fiftyone.zoo as foz

model = foz.load_zoo_model("yolo-nas-torch")

dataset = foz.load_zoo_dataset("quickstart")
dataset.apply_model(model, label_field="yolo_nas")

session = fo.launch_app(dataset)
```

Installation Methods


Prerequisites

General requirements:
- Python 3.7, 3.8 or 3.9 installed.
- 1.9.0 <= torch < 1.14 (https://pytorch.org/get-started/locally/)
- The Python packages that are specified in requirements.txt

To train on Nvidia GPUs:
- [Nvidia CUDA Toolkit >= 11.2](https://developer.nvidia.com/cuda-11.2.0-download-archive?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu)
- CuDNN >= 8.1.x
- Nvidia driver with CUDA >= 11.2 support (>= 460.x)

Quick Installation

Install the stable version from [PyPi](https://pypi.org/project/super-gradients/):

```bash
pip install super-gradients
```

That's it!

Install from GitHub:

```bash
pip install git+https://github.com/Deci-AI/super-gradients.git@stable
```

Implemented Model Architectures


All Computer Vision Models - Pretrained Checkpoints can be found in the Model Zoo

Image Classification

Semantic Segmentation

Object Detection

Pose Estimation


Implemented Datasets


Deci provides implementations for various datasets. If you need to download any of the datasets, you can find instructions in the documentation.

Image Classification

Semantic Segmentation

Object Detection

Pose Estimation


Documentation

Check SuperGradients Docs for full documentation, user guide, and examples.

Contributing

To learn about making a contribution to SuperGradients, please see our Contribution page.

Our awesome contributors:


Made with contrib.rocks.

Citation

If you are using the SuperGradients library or benchmarks in your research, please cite the SuperGradients deep learning training library.

Community

If you want to be a part of SuperGradients' growing community, hear about all the exciting news and updates, need help, want to request advanced features, or want to file a bug or issue report, we would love to welcome you aboard!

License

This project is released under the Apache 2.0 license.

Citing

BibTeX


```bibtex
@misc{supergradients,
  doi = {10.5281/ZENODO.7789328},
  url = {https://zenodo.org/record/7789328},
  author = {Aharon, Shay and {Louis-Dupont} and {Ofri Masad} and Yurkova, Kate and {Lotem Fridman} and {Lkdci} and Khvedchenya, Eugene and Rubin, Ran and Bagrov, Natan and Tymchenko, Borys and Keren, Tomer and Zhilko, Alexander and {Eran-Deci}},
  title = {Super-Gradients},
  publisher = {GitHub},
  journal = {GitHub repository},
  year = {2021},
}
```

Latest DOI




Request free trial here