:grapes: [Read our arXiv Paper] :apple: [Project Page]
Jianwei Yang*β, Hao Zhang*, Feng Li*, Xueyan Zou*, Chunyuan Li, Jianfeng Gao
* Core Contributors β Project Lead
We present Set-of-Mark (SoM) prompting, which simply overlays a set of spatial, speakable marks on an image to unleash the visual grounding abilities of the strongest LMM -- GPT-4V. Let's use visual prompting for vision!
https://github.com/microsoft/SoM/assets/3894247/8f827871-7ebd-4a5e-bef5-861516c4427b
[04/25] We released SoM-LLaVA, with a new dataset to empower open-source MLLMs with SoM prompting. Check it out! SoM-LLaVA
[11/21] Thanks to Roboflow and @SkalskiP, a Hugging Face demo for SoM + GPT-4V is online! Try it out!
[11/07] We released the vision benchmark we used to evaluate GPT-4V with SoM prompting! Check out the benchmark page!
[11/07] Now that GPT-4V API has been released, we are releasing a demo integrating SoM into GPT-4V!
export OPENAI_API_KEY=YOUR_API_KEY
python demo_gpt4v_som.py
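Under the hood, a SoM request pairs the marked image with a question that refers to the overlaid numeric marks. A minimal sketch of building such a payload with the OpenAI Python SDK (the model name and helper `build_som_request` are illustrative assumptions, not this demo's actual code):

```python
import base64

def build_som_request(image_path, question, model="gpt-4-vision-preview"):
    """Build a chat-completions payload pairing a SoM-marked image
    with a question that refers to the overlaid numeric marks."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                # Vision API accepts images as base64 data URLs.
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }
```

With `OPENAI_API_KEY` set, such a payload could then be sent via the SDK's chat-completions endpoint; see `demo_gpt4v_som.py` for the demo's actual implementation.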
[10/23] We released the SoM toolbox code for generating set-of-mark prompts for GPT-4V. Try it out!
Fascinating applications of SoM in GPT-4V:
Our method builds on the following models to generate the set of marks:
We are standing on the shoulders of the giant GPT-4V (playground)!
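At a high level, these models segment the image, and a numeric mark is then overlaid at a representative point of each mask. A minimal sketch of the mark-placement step, assuming binary masks as NumPy arrays (the function name `place_marks` and the centroid heuristic are illustrative stand-ins for the toolbox's actual placement logic):

```python
import numpy as np

def place_marks(masks):
    """Assign a numeric ID to each binary mask and pick a label
    anchor at the mask's centroid (a simple stand-in for the
    repo's actual placement heuristic)."""
    marks = []
    for idx, mask in enumerate(masks, start=1):
        ys, xs = np.nonzero(mask)
        if len(ys) == 0:
            continue  # skip empty masks
        cy, cx = int(ys.mean()), int(xs.mean())
        marks.append({"id": idx, "anchor": (cy, cx)})
    return marks

# Two toy 4x4 masks: one in the top-left, one in the bottom-right.
m1 = np.zeros((4, 4), dtype=bool); m1[0:2, 0:2] = True
m2 = np.zeros((4, 4), dtype=bool); m2[2:4, 2:4] = True
print(place_marks([m1, m2]))
# → [{'id': 1, 'anchor': (0, 0)}, {'id': 2, 'anchor': (2, 2)}]
```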
# install SEEM
pip install git+https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once.git@package
# install SAM
pip install git+https://github.com/facebookresearch/segment-anything.git
# install Semantic-SAM
pip install git+https://github.com/UX-Decoder/Semantic-SAM.git@package
# install Deformable Convolution for Semantic-SAM
cd ops && bash make.sh && cd ..
# common error fix:
python -m pip install 'git+https://github.com/MaureenZOU/detectron2-xyz.git'
sh download_ckpt.sh
python demo_som.py
And you will see this interface:
To deploy SoM to EC2 on AWS via GitHub Actions, see deploy.py.
Users can select which granularity of masks to generate and which mode to use, automatic (top) or interactive (bottom). A higher alpha-blending value (0.4) is used for better visualization.
SoM enables interleaved prompts which include textual and visual content. The visual content can be represented using the region indices.
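For example, a textual prompt can refer to regions purely by their mark indices. A toy sketch of assembling such an interleaved prompt (the helper `interleave` is hypothetical, for illustration only):

```python
def interleave(template, regions):
    """Fill a textual template with speakable region references,
    turning mark indices like 3 into the phrase 'region 3'."""
    refs = [f"region {idx}" for idx in regions]
    return template.format(*refs)

prompt = interleave("Move the object in {0} next to {1}.", [3, 7])
print(prompt)  # → Move the object in region 3 next to region 7.
```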
In comparison to GPT-4V without SoM, adding marks enables GPT-4V to ground its reasoning in the detailed contents of the image (left). Clear cross-image object references are observed on the right.
Case study on solving a CAPTCHA. GPT-4V gives a wrong answer with an incorrect count of squares, while after SoM prompting it finds the correct squares via their corresponding marks.
Case study on an image of a dish for GPT-4V. GPT-4V does not produce a grounded answer with the original image. With SoM prompting, GPT-4V not only names the ingredients but also grounds each one to its region.
SoM-prompted GPT-4V gives very precise suggestions, while the original fails and even hallucinates foods, e.g., soft drinks.
Likewise, GPT-4V with SoM can provide thorough tool-usage instructions, teaching users the function of each button on a controller. Note that although this image is not fully labeled, GPT-4V can still provide information about the unlabeled buttons.
GPT-4V with SoM gives a reasonable suggestion on how to achieve a goal in a gaming scenario.
We conduct experiments on various vision tasks to verify the effectiveness of SoM. Results show that GPT-4V+SoM outperforms specialists on most vision tasks and is comparable to MaskDINO on COCO panoptic segmentation.
If you find our work helpful for your research, please consider citing the following BibTeX entry.
@article{yang2023setofmark,
title={Set-of-Mark Prompting Unleashes Extraordinary Visual Grounding in GPT-4V},
author={Jianwei Yang and Hao Zhang and Feng Li and Xueyan Zou and Chunyuan Li and Jianfeng Gao},
journal={arXiv preprint arXiv:2310.11441},
year={2023},
}