roboflow / multimodal-maestro

streamline the fine-tuning process for multimodal models: PaliGemma, Florence-2, Phi-3.5 Vision
Apache License 2.0

maestro

coming: when it's ready...

👋 hello

maestro is a tool designed to streamline and accelerate the fine-tuning process for multimodal models. It provides ready-to-use recipes for fine-tuning popular vision-language models (VLMs) such as Florence-2, PaliGemma, and Phi-3.5 Vision on downstream vision-language tasks.

💻 install

Pip install the `maestro` package in a Python>=3.8 environment.

```bash
pip install maestro
```

🔥 quickstart

CLI

VLMs can be fine-tuned on downstream tasks directly from the command line with the `maestro` command:

```bash
maestro florence2 train --dataset='<DATASET_PATH>' --epochs=10 --batch-size=8
```

SDK

Alternatively, you can fine-tune VLMs using the Python SDK, which accepts the same arguments as the CLI example above:

```python
from maestro.trainer.common import MeanAveragePrecisionMetric
from maestro.trainer.models.florence_2 import train, TrainingConfiguration

config = TrainingConfiguration(
    dataset='<DATASET_PATH>',
    epochs=10,
    batch_size=8,
    metrics=[MeanAveragePrecisionMetric()]
)

train(config)
```
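Since a training run can fail late on a bad `--dataset` path, it can help to validate the path before calling `train`. The helper below is a hypothetical sketch, not part of the maestro API: it assumes a split-per-directory dataset layout (`train/`, `valid/`, `test/`), which may differ from the layout maestro actually expects.

```python
from pathlib import Path

# Hypothetical pre-flight check; the split names below are an assumption
# about the dataset layout, not something maestro documents here.
def check_dataset(root: str, splits=("train", "valid", "test")) -> list[str]:
    """Return which of the expected split directories exist under `root`."""
    root_path = Path(root)
    if not root_path.is_dir():
        raise FileNotFoundError(f"dataset root not found: {root}")
    return [split for split in splits if (root_path / split).is_dir()]
```

A missing split then surfaces as a clear error before any model weights are downloaded, rather than partway through training.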

📚 notebooks

Explore our collection of notebooks that demonstrate how to fine-tune various vision-language models using maestro. Each notebook provides step-by-step instructions and code examples to help you get started quickly.

| model and task | colab | video |
| :--- | :---: | :---: |
| Fine-tune Florence-2 for object detection | Open In Colab | YouTube |

🦸 contribution

We would love your help in making this repository even better! We are especially looking for contributors with experience in fine-tuning vision-language models (VLMs). If you notice any bugs or have suggestions for improvement, feel free to open an issue or submit a pull request.