CVNets: A library for training computer vision networks
https://apple.github.io/ml-cvnets

CVNets is a computer vision toolkit that allows researchers and engineers to train standard and novel mobile- and non-mobile computer vision models for a variety of tasks, including object classification, object detection, semantic segmentation, and foundation models (e.g., CLIP).


Installation

We recommend using Python 3.10+ and PyTorch (version >= v1.12.0).

The instructions below use Conda. If you don't have Conda installed, you can check out How to Install Conda.

# Clone the repo
git clone git@github.com:apple/ml-cvnets.git
cd ml-cvnets

# Create a virtual env. We use Conda
conda create -n cvnets python=3.10.8
conda activate cvnets

# install requirements and CVNets package
pip install -r requirements.txt -c constraints.txt
pip install --editable .
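
As a quick sanity check, you can confirm that the editable install succeeded by importing the package. This is only a minimal, optional example and assumes the commands above completed without errors.

# Verify that the cvnets package and PyTorch import cleanly
python -c "import cvnets, torch; print(torch.__version__)"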

Getting started

Supported models and Tasks

To see a list of available models and benchmarks, please refer to the Model Zoo and the examples folder.

* ImageNet classification models
  * CNNs
    * [MobileNetv1](https://arxiv.org/abs/1704.04861)
    * [MobileNetv2](https://arxiv.org/abs/1801.04381)
    * [MobileNetv3](https://arxiv.org/abs/1905.02244)
    * [EfficientNet](https://arxiv.org/abs/1905.11946)
    * [ResNet](https://arxiv.org/abs/1512.03385)
    * [RegNet](https://arxiv.org/abs/2003.13678)
  * Transformers
    * [Vision Transformer](https://arxiv.org/abs/2010.11929)
    * [MobileViTv1](https://arxiv.org/abs/2110.02178)
    * [MobileViTv2](https://arxiv.org/abs/2206.02680)
    * [SwinTransformer](https://arxiv.org/abs/2103.14030)
* Multimodal Classification
  * [ByteFormer](https://arxiv.org/abs/2306.00238)
* Object detection
  * [SSD](https://arxiv.org/abs/1512.02325)
  * [Mask R-CNN](https://arxiv.org/abs/1703.06870)
* Semantic segmentation
  * [DeepLabv3](https://arxiv.org/abs/1706.05587)
  * [PSPNet](https://arxiv.org/abs/1612.01105)
* Foundation models
  * [CLIP](https://arxiv.org/abs/2103.00020)
* Automatic Data Augmentation
  * [RangeAugment](https://arxiv.org/abs/2212.10553)
  * [AutoAugment](https://arxiv.org/abs/1805.09501)
  * [RandAugment](https://arxiv.org/abs/1909.13719)
* Distillation
  * Soft distillation
  * Hard distillation
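
Models in CVNets are typically trained from YAML configuration files. The command below is a hedged sketch of what a training launch can look like: the configuration path is a placeholder, and the `cvnets-train` entry point and `--common.*` flags are assumptions based on the examples folder rather than a verified recipe.

# Hypothetical example: launch ImageNet classification training from a YAML config.
# Replace the placeholder config path with a real file from the examples folder.
export CFG_FILE="path/to/classification_config.yaml"
cvnets-train --common.config-file $CFG_FILE --common.results-loc results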

Maintainers

This code was developed by Sachin and is now maintained by Sachin, Maxwell Horton, Mohammad Sekhavat, and Yanzi Jin.

Previous Maintainers

Research effort at Apple using CVNets

Below is a list of publications from Apple that use CVNets:

Contributing to CVNets

We welcome PRs from the community! You can find information about contributing to CVNets in our contributing document.

Please remember to follow our Code of Conduct.

License

For license details, see LICENSE.

Citation

If you find our work useful, please cite the following papers:

@inproceedings{mehta2022mobilevit,
     title={MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer},
     author={Sachin Mehta and Mohammad Rastegari},
     booktitle={International Conference on Learning Representations},
     year={2022}
}

@inproceedings{mehta2022cvnets, 
     author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad}, 
     title = {CVNets: High Performance Library for Computer Vision}, 
     year = {2022}, 
     booktitle = {Proceedings of the 30th ACM International Conference on Multimedia}, 
     series = {MM '22} 
}