CircleRadon / Osprey

[CVPR2024] The code for "Osprey: Pixel Understanding with Visual Instruction Tuning"
Apache License 2.0

![Static Badge](https://img.shields.io/badge/Osprey-v1-F7C97E) [![arXiv preprint](https://img.shields.io/badge/arxiv-2312.10032-ECA8A7?logo=arxiv)](https://arxiv.org/pdf/2312.10032.pdf) [![Dataset](https://img.shields.io/badge/Dataset-Hugging_Face-CFAFD4)](https://huggingface.co/datasets/AntGroup-MI/Osprey-724K) [![video](https://img.shields.io/badge/Watch_Video-36600E?logo=youtube&logoColor=green)](https://youtu.be/YsxqHBBnDfk) [![Static Badge](https://img.shields.io/badge/Try_Demo-6B88E3?logo=youtubegaming&logoColor=DAE4EE)](http://111.0.123.204:8000/)
Demo username & password: osprey


A part of Along the River During the Qingming Festival (清明上河图)

Spirited Away (千与千寻)

Updates 📌

[2024/3/29]🔥 We released the Osprey-Chat model, which exhibits better conversation and image-level understanding & reasoning capabilities.

[2024/2/27]🔥 Osprey has been accepted to CVPR2024!

[2024/1/15]🔥 We released the evaluation code.

[2023/12/29]🔥 We released the training code and Osprey-724K dataset.

[2023/12/18]🔥 We released the code, the Osprey-7b model, and the online demo for Osprey.

What is Osprey 👀

Osprey is a mask-text instruction tuning approach that extends MLLMs by incorporating pixel-wise mask regions into language instructions, enabling fine-grained visual understanding. Given an input mask region, Osprey generates semantic descriptions, including both a short description and a detailed description.

Osprey can seamlessly integrate with SAM in point-prompt, box-prompt, and segment-everything modes to generate the semantics associated with specific parts or objects.
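
As a rough sketch of this pipeline (not the repository's actual API: `describe_region` below is a hypothetical stand-in for Osprey's inference entry point, which lives in the demo code), a point-prompted SAM mask could be handed to Osprey like this:

    import numpy as np
    from segment_anything import sam_model_registry, SamPredictor

    # Load SAM (the ViT-B checkpoint used by the demo) and build a point-prompt predictor.
    sam = sam_model_registry["vit_b"](checkpoint="demo/checkpoints/sam_vit_b_01ec64.pth")
    predictor = SamPredictor(sam)

    image = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder; replace with a real RGB image
    predictor.set_image(image)
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[320, 240]]),  # one foreground click
        point_labels=np.array([1]),
    )
    mask = masks[scores.argmax()]  # boolean (H, W) mask of the clicked object

    # Hypothetical call: pass the image and the binary mask region to Osprey to obtain
    # the short and detailed descriptions; the real entry point is in demo/app.py.
    short_desc, detailed_desc = describe_region(image, mask)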

Watch Video Demo 🎥

Try Our Demo 🕹️

Online demo

Click 👇 to try our demo online.

web demo

username: osprey
password: osprey

Point

Box

Everything

Offline demo

💻 Requirements: this demo needs about 17 GB of GPU memory: 15 GB for Osprey and 2 GB for SAM.

  1. First, install Gradio-Osprey-Demo.

  2. Install Segment Anything.

    pip install git+https://github.com/facebookresearch/segment-anything.git
  3. Download all the checkpoints:

The default paths of all the checkpoints:

    demo
    ├── checkpoints
    │   ├── Osprey_7b
    │   └── sam_vit_b_01ec64.pth
    └── open_clip_pytorch_model.bin

Alternatively, change "mm_vision_tower" in the config.json of the Osprey-7b model to the absolute path of open_clip_pytorch_model.bin (a small sketch for doing this is shown after these steps).

  4. Run app.py:
    cd demo
    python app.py --model checkpoints/Osprey_7b
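
If you take the config-editing route from step 3 instead of using the default layout, a minimal sketch (assuming the default paths shown above) for pointing `mm_vision_tower` at the CLIP weights is:

    import json
    from pathlib import Path

    # Assumed locations, matching the default checkpoint layout above.
    config_path = Path("demo/checkpoints/Osprey_7b/config.json")
    clip_weights = Path("demo/open_clip_pytorch_model.bin").resolve()  # absolute path

    config = json.loads(config_path.read_text())
    config["mm_vision_tower"] = str(clip_weights)  # point the vision tower at the CLIP weights
    config_path.write_text(json.dumps(config, indent=2))
    print("mm_vision_tower ->", config["mm_vision_tower"])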

Install 🛠️

  1. Clone this repository and navigate to Osprey folder
    git clone https://github.com/CircleRadon/Osprey.git
    cd Osprey
  2. Install packages
    conda create -n osprey python=3.10 -y
    conda activate osprey
    pip install --upgrade pip  # enable PEP 660 support
    pip install -e .
  3. Install additional packages for training cases
    pip install -e ".[train]"
    pip install flash-attn --no-build-isolation
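
As an optional sanity check after installation (assuming a CUDA-capable GPU; flash-attn is only needed for training), you can verify the environment with a short snippet:

    # Quick environment check after installation.
    import torch

    print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())

    try:
        import flash_attn  # only required for the training setup
        print("flash-attn", flash_attn.__version__)
    except ImportError:
        print("flash-attn not installed (only needed for training)")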

Dataset 🌟

All datasets for training can be found in Dataset preparation.

Osprey-724K: 🤗Hugging Face

Osprey-724K is an instruction dataset with mask-text pairs, containing around 724K GPT-generated multimodal dialogues that encourage fine-grained pixel-level image understanding in MLLMs. It contains object-level, part-level, and additional instruction samples for robustness and flexibility.
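
A minimal sketch for fetching the Osprey-724K annotation files from Hugging Face (this uses `huggingface_hub` as an assumption; see Dataset preparation for how the data is organized for training):

    from huggingface_hub import snapshot_download

    # Download the Osprey-724K instruction data to a local cache directory.
    local_dir = snapshot_download(repo_id="AntGroup-MI/Osprey-724K", repo_type="dataset")
    print("Osprey-724K files downloaded to:", local_dir)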

Training 🚀

Checkpoints 🤖

Osprey-7b model🤗: model

We also provide the checkpoint of the intermediate stage 2; please check model.

Evaluation 🔎

See evaluation for details.

TODO List 📝

Acknowledgement 💌

BibTeX 🖊️

@misc{Osprey,
  title={Osprey: Pixel Understanding with Visual Instruction Tuning},
  author={Yuqian Yuan and Wentong Li and Jian Liu and Dongqi Tang and Xinjie Luo and Chi Qin and Lei Zhang and Jianke Zhu},
  year={2023},
  eprint={2312.10032},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}