Models | Example scripts | Getting started | Code overview | Installation | Contributing | License
TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale, including both content understanding and generative models.

TorchMultimodal contains a number of models, including ALBEF, BLIP-2, CLIP, DDPM, FLAVA, MAE, MDETR, MUGEN, and Omnivore.
In addition to the above models, we provide example scripts for training, fine-tuning, and evaluating models on popular multimodal tasks. Examples can be found under examples/ and include the following:
| Model | Supported Tasks |
|---|---|
| ALBEF | Retrieval, Visual Question Answering |
| DDPM | Training and Inference (notebook) |
| FLAVA | Pretraining, Fine-tuning, Zero-shot |
| MDETR | Phrase grounding, Visual Question Answering |
| MUGEN | Text-to-video retrieval, Text-to-video generation |
| Omnivore | Pre-training, Evaluation |
Below we give minimal examples of how you can write a simple training or zero-shot evaluation script using components from TorchMultimodal.
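For instance, here is a minimal sketch of zero-shot image classification with FLAVA. The image path and candidate labels are placeholders, and the entry points (`flava_model`, `FLAVAImageTransform`, `BertTextTransform`) are assumed to match the pretrained FLAVA components shipped with the library; see the scripts under examples/ for the canonical versions.

```python
import torch
from PIL import Image

from torchmultimodal.models.flava.model import flava_model
from torchmultimodal.transforms.bert_text_transform import BertTextTransform
from torchmultimodal.transforms.flava_transform import FLAVAImageTransform

# Load a pretrained FLAVA model along with its image and text transforms.
model = flava_model(pretrained=True)
model.eval()
image_transform = FLAVAImageTransform(is_train=False)
text_transform = BertTextTransform()

# "my_image.jpg" and the labels below are placeholder inputs.
image = image_transform(Image.open("my_image.jpg"))["image"].unsqueeze(0)
labels = ["dog", "cat", "house"]
texts = text_transform(labels)

with torch.no_grad():
    # Project image and text into the shared embedding space and score them.
    _, image_features = model.encode_image(image, projection=True)
    _, text_features = model.encode_text(texts, projection=True)
    scores = image_features @ text_features.t()
    probs = scores.softmax(dim=-1)

print(f"Predicted label: {labels[probs.argmax().item()]}")
```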
diffusion_labs contains components for building diffusion models. For more details on these components, see diffusion_labs/README.md.
The torchmultimodal/models directory contains model classes as well as any other modeling code specific to a given architecture. For example, torchmultimodal/models/blip2 contains modeling components specific to BLIP-2.
The torchmultimodal/modules directory contains common, generic building blocks that can be stitched together to build a new architecture. This includes layers like codebooks, patch embeddings, and transformer encoders/decoders; losses like contrastive loss with temperature and reconstruction loss; encoders like ViT and BERT; and fusion modules like Deep Set fusion.
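As a sketch of using one of these blocks, the snippet below applies a CLIP-style contrastive loss with a learnable temperature. The import path is assumed from the modules/losses directory layout, and the embeddings are random stand-ins for real encoder outputs; check the source for the exact location and signature.

```python
import torch

# Assumed import path; check torchmultimodal/modules/losses for the exact location.
from torchmultimodal.modules.losses.contrastive_loss_with_temperature import (
    ContrastiveLossWithTemperature,
)

# Random stand-ins for image/text encoder outputs (e.g. from ViT and BERT).
image_embeddings = torch.randn(8, 512)
text_embeddings = torch.randn(8, 512)

# CLIP-style contrastive loss with a learnable temperature parameter.
loss_fn = ContrastiveLossWithTemperature()
loss = loss_fn(image_embeddings, text_embeddings)
loss.backward()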
The torchmultimodal/transforms directory contains common data transforms from popular models, e.g. CLIP, FLAVA, and MAE.
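For example, the CLIP transforms can be applied to raw inputs before they reach an encoder. The `CLIPImageTransform` and `CLIPTextTransform` entry points below are assumptions based on the transforms directory layout, and the image path is a placeholder; consult torchmultimodal/transforms for the exact classes and defaults.

```python
from PIL import Image

# Assumed entry points; see torchmultimodal/transforms/clip_transform.py.
from torchmultimodal.transforms.clip_transform import (
    CLIPImageTransform,
    CLIPTextTransform,
)

image_transform = CLIPImageTransform()
text_transform = CLIPTextTransform()

# "my_image.jpg" is a placeholder input.
pixel_values = image_transform(Image.open("my_image.jpg"))
token_ids = text_transform(["a photo of a dog"])
```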
TorchMultimodal requires Python >= 3.8. The library can be installed with or without CUDA support. The following assumes conda is installed.
Create and activate a conda environment:
```
conda create -n torch-multimodal python=<python_version>
conda activate torch-multimodal
```
Install pytorch, torchvision, and torchaudio (see the PyTorch documentation):
```
# Use the current CUDA version as shown at https://pytorch.org/get-started/locally/
# Select the nightly PyTorch build, Linux as the OS, and conda. Pick the most recent CUDA version.
conda install pytorch torchvision torchaudio pytorch-cuda=<cuda_version> -c pytorch-nightly -c nvidia

# For CPU-only install
conda install pytorch torchvision torchaudio cpuonly -c pytorch-nightly
```
Nightly binaries on Linux for Python 3.8 and 3.9 can be installed via pip wheels. For now we only support the Linux platform through PyPI.
```
python -m pip install torchmultimodal-nightly
```
Alternatively, you can build from source and run our examples:
```
git clone --recursive https://github.com/facebookresearch/multimodal.git multimodal
cd multimodal
pip install -e .
```
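Whichever route you take, a quick sanity check (assuming nothing beyond the package's import name) is to import the library from Python:

```python
# If the install succeeded, this import should complete without error.
import torchmultimodal

print("TorchMultimodal imported successfully")
```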
For developers, please follow the development installation.
We welcome any feature requests, bug reports, or pull requests from the community. See the CONTRIBUTING file for how to help out.
TorchMultimodal is BSD licensed, as found in the LICENSE file.