[2024/3/29]🔥 We released the Osprey-Chat model, which exhibits better conversation and image-level understanding and reasoning capabilities.
[2024/2/27]🔥 Osprey has been accepted to CVPR2024!
[2024/1/15]🔥 We released the evaluation code.
[2023/12/29]🔥 We released the training code and Osprey-724K dataset.
[2023/12/18]🔥 We released the code, the Osprey-7b model and an online demo for Osprey.
Osprey is a mask-text instruction tuning approach that extends MLLMs by incorporating pixel-wise mask regions into language instructions, enabling fine-grained visual understanding. Given an input mask region, Osprey generates semantic descriptions at two levels: a short description and a detailed description.
Osprey can seamlessly integrate with SAM in point-prompt, box-prompt and segment-everything modes to generate the semantics associated with specific parts or objects.
Click 👇 to try our demo online.
username: osprey
password: osprey
Demo modes: Point | Box | Everything
💻 Requirements: this demo needs about 17GB of GPU memory, for Osprey (15GB) and SAM (2GB).
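Before launching, you can verify there is enough free GPU memory. This check is just a convenience sketch, not part of the repo:

```python
import torch

# Minimal sketch: confirm ~17GB of free GPU memory (Osprey ~15GB + SAM ~2GB).
free_bytes, total_bytes = torch.cuda.mem_get_info()  # (free, total) on the current device
free_gb = free_bytes / 1024**3
print(f"Free GPU memory: {free_gb:.1f} GB of {total_bytes / 1024**3:.1f} GB")
assert free_gb >= 17, "Need roughly 17GB free for Osprey (15GB) + SAM (2GB)"
```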
First install Gradio-Osprey-Demo.
Install Segment Anything:

```
pip install git+https://github.com/facebookresearch/segment-anything.git
```
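Once installed, you can sanity-check SAM against the checkpoint the demo uses (see the checkpoint layout below). A minimal sketch with the official segment-anything API; the image path is a placeholder, and it assumes you run from the `demo` directory:

```python
import cv2
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

# Load the ViT-B SAM checkpoint the demo expects.
sam = sam_model_registry["vit_b"](checkpoint="checkpoints/sam_vit_b_01ec64.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("your_image.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # each mask dict has 'segmentation', 'bbox', 'area', ...
print(f"SAM produced {len(masks)} masks")
```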
Download all the checkpoints:
The default path of all the checkpoints:
```
├── demo
    ├── checkpoints
    │   ├── Osprey_7b
    │   └── sam_vit_b_01ec64.pth
    └── open_clip_pytorch_model.bin
```
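A small convenience check (ours, not the repo's) that the files are where the demo expects them, run from the repo root:

```python
from pathlib import Path

# Expected checkpoint layout under demo/ (see the tree above).
expected = [
    Path("demo/checkpoints/Osprey_7b"),
    Path("demo/checkpoints/sam_vit_b_01ec64.pth"),
    Path("demo/open_clip_pytorch_model.bin"),
]
for p in expected:
    status = "ok" if p.exists() else "MISSING"
    print(f"{status:8s} {p}")
```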
Alternatively, change the `"mm_vision_tower"` field in the `config.json` of the Osprey-7b model to the absolute path of `open_clip_pytorch_model.bin`.
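If you go the config-edit route, a minimal sketch (the config path assumes the default layout above):

```python
import json
from pathlib import Path

# Point the model at the CLIP weights via an absolute path.
cfg_path = Path("demo/checkpoints/Osprey_7b/config.json")
cfg = json.loads(cfg_path.read_text())
cfg["mm_vision_tower"] = str(Path("demo/open_clip_pytorch_model.bin").resolve())
cfg_path.write_text(json.dumps(cfg, indent=2))
print("mm_vision_tower ->", cfg["mm_vision_tower"])
```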
Run `app.py`:

```
cd demo
python app.py --model checkpoints/Osprey_7b
```
Clone this repository and navigate to the Osprey folder:

```
git clone https://github.com/CircleRadon/Osprey.git
cd Osprey
```

Install packages:

```
conda create -n osprey python=3.10 -y
conda activate osprey
pip install --upgrade pip  # enable PEP 660 support
pip install -e .
```

Install additional packages for training:

```
pip install -e ".[train]"
pip install flash-attn --no-build-isolation
```
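A quick post-install sanity check (our suggestion; `flash_attn` only imports successfully on a supported CUDA setup):

```python
import torch
import flash_attn  # fails here if the wheel didn't build against your CUDA toolkit

print("torch:", torch.__version__, "| cuda available:", torch.cuda.is_available())
print("flash-attn:", flash_attn.__version__)
```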
All the datasets for training can be found in Dataset preparation.
Osprey-724K: 🤗 Hugging Face

Osprey-724K is an instruction dataset with mask-text pairs, containing around 724K GPT-generated multimodal dialogues to encourage MLLMs toward fine-grained pixel-level image understanding. It contains object-level, part-level and additional instruction samples for robustness and flexibility.
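To get a feel for the data, a generic inspection sketch; the file name is a placeholder, and the exact field names may differ from the released JSON:

```python
import json

# Placeholder path: substitute one of the released Osprey-724K JSON files.
with open("path/to/osprey_724k_split.json") as f:
    records = json.load(f)  # assuming a JSON list of records

print("records:", len(records))
first = records[0]
print("top-level keys:", sorted(first.keys()))  # expect image/mask annotations plus dialogue turns
```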
Stage1: Image-Text Alignment Pre-training

Stage2: Mask-Text Alignment Pre-training

- Set `model_name_or_path` in `stage2.sh` to the path of `vicuna-7b-v1.5`.
- Set `pretrain_mm_mlp_adapter` in `stage2.sh` to the path of `mm_projector`.
- Set `vision_tower` in `stage2.sh` to the path of the Convnext-large-CLIP model.
- Run `sh scripts/stage2.sh`.

Stage3: End-to-End Fine-tuning

- Set `model_name_or_path` in `stage3.sh` to the path of the stage2 checkpoint.
- Set `vision_tower` in `stage3.sh` to the path of the Convnext-large-CLIP model.
- Run `sh scripts/stage3.sh`.
Osprey-7b model: 🤗 model

We also provide the checkpoint of the intermediate stage2; please check model.
See evaluation for details.
```
@misc{Osprey,
  title={Osprey: Pixel Understanding with Visual Instruction Tuning},
  author={Yuqian Yuan and Wentong Li and Jian Liu and Dongqi Tang and Xinjie Luo and Chi Qin and Lei Zhang and Jianke Zhu},
  year={2023},
  eprint={2312.10032},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```