torchbench

Easily benchmark machine learning models in PyTorch

torchbench is a library of deep learning benchmarks you can use to evaluate your models, optimized for the PyTorch framework. It can be used in conjunction with the sotabench service to record results for models, so the community can compare model performance on different tasks, and as a continuous-integration-style service that benchmarks the models in your repository on each commit.

Benchmarks Supported

- Image classification on ImageNet
- Object detection on COCO (experimental)
- Semantic segmentation on PASCAL VOC (experimental)

PRs welcome for further benchmarks!
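
Each benchmark is exposed as a class with a benchmark() entry point. A minimal import sketch: the image classification path is the one used in the example below, while the two experimental module paths are assumptions and may change.

# Benchmark entry points. ImageNet is used in the example later in this
# README; the COCO and PASCALVOC paths are assumed from the experimental
# benchmarks mentioned here and should be checked against the docs.
from torchbench.image_classification import ImageNet
from torchbench.object_detection import COCO
from torchbench.semantic_segmentation import PASCALVOC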

Installation

Requires Python 3.6+.

pip install torchbench
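
To verify the install, a quick import check (the import path is the same one used in the example below):

python -c "from torchbench.image_classification import ImageNet; print('ok')"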

Get Benching! 🏋️

You should read the full documentation, which contains guidance on getting started and connecting to sotabench.

The API is optimized for PyTorch implementations. For example, if you wanted to benchmark a torchvision model for ImageNet, you would write a sotabench.py file like this:

from torchbench.image_classification import ImageNet
from torchvision.models.resnet import resnext101_32x8d
import torchvision.transforms as transforms
import PIL

# Define the transforms needed to convert ImageNet data to the expected model input
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], 
    std=[0.229, 0.224, 0.225])
input_transform = transforms.Compose([
    transforms.Resize(256, PIL.Image.BICUBIC),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    normalize,
])

# Run the benchmark
ImageNet.benchmark(
    model=resnext101_32x8d(pretrained=True),
    paper_model_name='ResNeXt-101-32x8d',
    paper_arxiv_id='1611.05431',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)
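
A single sotabench.py can contain several benchmark calls, one per model, each recorded as its own result. A minimal sketch reusing the transform defined above; ResNet-50 and its arXiv ID stand in for whichever second model you want to track.

from torchvision.models.resnet import resnet50

# A second entry in the same sotabench.py; each benchmark() call
# records a separate result against the named paper.
ImageNet.benchmark(
    model=resnet50(pretrained=True),
    paper_model_name='ResNet-50',
    paper_arxiv_id='1512.03385',
    input_transform=input_transform,
    batch_size=256,
    num_gpu=1
)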

Sotabench will run this on each commit and record the results. For other tasks, such as object detection and semantic segmentation, implementations are much less standardized than for image classification, so it is recommended that you use sotabencheval for these tasks, although torchbench does include experimental benchmarks for COCO and PASCAL VOC.
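
For illustration, evaluation with sotabencheval looks roughly like the sketch below: instead of wrapping your model, you feed the evaluator predictions in COCO format. The COCOEvaluator class and its add()/save() methods come from sotabencheval, but treat the exact parameter names and result-dict fields as assumptions to check against the sotabencheval docs.

from sotabencheval.object_detection import COCOEvaluator

# The evaluator identifies which paper result these detections belong to,
# mirroring the paper_model_name/paper_arxiv_id arguments used by torchbench.
evaluator = COCOEvaluator(
    paper_model_name='Mask R-CNN (ResNet-50-FPN)',
    paper_arxiv_id='1703.06870'
)

# Each detection is a COCO-style dict: image_id, category_id,
# bbox as [x, y, width, height], and a confidence score.
# The values below are illustrative only.
detections = [
    {'image_id': 397133, 'category_id': 1,
     'bbox': [258.2, 41.3, 348.3, 243.4], 'score': 0.99},
]
evaluator.add(detections)
evaluator.save()  # finalize and record the results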

Contributing

All contributions welcome!