
[NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once"

👀SEEM: Segment Everything Everywhere All at Once

:grapes: [Read our arXiv Paper]   :apple: [Try our Demo]

We introduce SEEM, a model that can Segment Everything Everywhere with Multi-modal prompts all at once. SEEM allows users to easily segment an image using prompts of different types, including visual prompts (points, marks, boxes, scribbles, and image segments) and language prompts (text and audio). It can also work with any combination of prompts and generalize to custom prompts!

by Xueyan Zou*, Jianwei Yang*, Hao Zhang*, Feng Li*, Linjie Li, Jianfeng Wang, Lijuan Wang, Jianfeng Gao^, Yong Jae Lee^, in NeurIPS 2023.

A brief introduction to all the generic and interactive segmentation tasks SEEM can do!

SEEM design

:rocket: Updates

:bookmark_tabs: Catalog

We release the following contents for both SEEM and X-Decoder:exclamation:

:point_right: One-Line SEEM Demo with Linux:

```sh
git clone git@github.com:UX-Decoder/Segment-Everything-Everywhere-All-At-Once.git && sh assets/scripts/run_demo.sh
```

:round_pushpin: [New] Getting Started:

:round_pushpin: [New] Latest Checkpoints and Numbers:

| Method | Checkpoint | Backbone | COCO PQ ↑ | COCO mAP ↑ | COCO mIoU ↑ | Ref-COCOg cIoU ↑ | Ref-COCOg mIoU ↑ | Ref-COCOg AP50 ↑ | VOC NoC85 ↓ | VOC NoC90 ↓ | SBD NoC85 ↓ | SBD NoC90 ↓ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| X-Decoder | ckpt | Focal-T | 50.8 | 39.5 | 62.4 | 57.6 | 63.2 | 71.6 | - | - | - | - |
| X-Decoder-oq201 | ckpt | Focal-L | 56.5 | 46.7 | 67.2 | 62.8 | 67.5 | 76.3 | - | - | - | - |
| SEEM_v0 | ckpt | Focal-T | 50.6 | 39.4 | 60.9 | 58.5 | 63.5 | 71.6 | 3.54 | 4.59 | * | * |
| SEEM_v0 | - | Davit-d3 | 56.2 | 46.8 | 65.3 | 63.2 | 68.3 | 76.6 | 2.99 | 3.89 | 5.93 | 9.23 |
| SEEM_v0 | ckpt | Focal-L | 56.2 | 46.4 | 65.5 | 62.8 | 67.7 | 76.2 | 3.04 | 3.85 | * | * |
| SEEM_v1 | ckpt | SAM-ViT-B | 52.0 | 43.5 | 60.2 | 54.1 | 62.2 | 69.3 | 2.53 | 3.23 | * | * |
| SEEM_v1 | ckpt | SAM-ViT-L | 49.0 | 41.6 | 58.2 | 53.8 | 62.2 | 69.5 | 2.40 | 2.96 | * | * |
| SEEM_v1 | ckpt/log | Focal-T | 50.8 | 39.4 | 60.7 | 58.5 | 63.7 | 72.0 | 3.19 | 4.13 | * | * |
| SEEM_v1 | ckpt/log | Focal-L | 56.1 | 46.3 | 65.8 | 62.4 | 67.8 | 76.0 | 2.66 | 3.44 | * | * |

SEEM_v0: supports training and inference with a single interactive object
SEEM_v1: supports training and inference with multiple interactive objects
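
If you just want to inspect a downloaded checkpoint before wiring it into the codebase, a plain `torch.load` is enough to list the stored weights. This is a minimal sketch; the filename `seem_focall_v1.pt` and the `"model"` wrapper key are assumptions, so substitute whatever file you actually downloaded from the table above.

```python
# Minimal sketch: inspect a downloaded SEEM checkpoint with plain PyTorch.
import torch

ckpt_path = "seem_focall_v1.pt"  # assumed filename; use the ckpt you downloaded
state = torch.load(ckpt_path, map_location="cpu")

# Some releases nest the weights under a key such as "model"; unwrap if present.
state_dict = state["model"] if isinstance(state, dict) and "model" in state else state

print(f"{len(state_dict)} entries")
for name, value in list(state_dict.items())[:10]:
    shape = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
    print(name, shape)
```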

:fire: Related projects:

:fire: Other projects you may find interesting:

:bulb: Highlights

Inspired by the appealing universal interface of LLMs, we advocate a universal, interactive, multi-modal interface for any type of segmentation with ONE SINGLE MODEL. We highlight four important features of SEEM below.

  1. Versatility: works with various types of prompts, for example, clicks, boxes, polygons, scribbles, texts, and referring images;
  2. Compositionality: handles any composition of prompts;
  3. Interactivity: interacts with the user over multiple rounds, thanks to SEEM's memory prompt that stores the session history (see the toy sketch after this list);
  4. Semantic awareness: gives a semantic label to any predicted mask.
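
To make the interactivity and semantic-awareness points concrete, here is a toy illustration of multi-round refinement, where each round's prediction is carried over as "memory" and every mask comes with a label. `toy_segment` is a stand-in written for this sketch, not the repository's API.

```python
# Toy illustration: each round's mask is kept as a "memory" so new clicks
# refine the previous result instead of restarting. Not SEEM's actual code.
import numpy as np

def toy_segment(image, clicks, memory=None):
    mask = np.zeros(image.shape[:2], dtype=bool)
    for y, x in clicks:
        mask[max(0, y - 4):y + 4, max(0, x - 4):x + 4] = True  # fake local mask
    if memory is not None:
        mask |= memory  # build on what was segmented in earlier rounds
    return mask, "object"  # SEEM also attaches a semantic label to each mask

image = np.zeros((64, 64, 3), dtype=np.uint8)
memory = None
for clicks in ([(10, 10)], [(30, 30)], [(50, 20)]):
    memory, label = toy_segment(image, clicks, memory)
    print(f"mask now covers {memory.sum()} pixels, label = {label}")
```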

:unicorn: How to use the demo

:volcano: An interesting example

An example with Transformers. The referring image is the truck form of Optimus Prime. Our model can segment Optimus Prime in target images no matter which form it is in. Thanks to Hongyang Li for this fun example.

assets/images/transformers_gh.png

:tulip: NeRF Examples

:camping: Click, scribble to mask

With a simple click or stroke from the user, we can generate the masks and the corresponding category labels for them.

SEEM design

:mountain_snow: Text to mask

SEEM can generate a mask from a text input by the user, providing multi-modal interaction with humans.

example

:mosque: Referring image to mask

With a simple click or stroke on the referring image, the model is able to segment the objects with similar semantics on the target images. example

SEEM understands spatial relationships very well. Look at the three zebras! The segmented zebras have positions similar to those of the referred zebras. For example, when the leftmost zebra is referred to in the upper row, the leftmost zebra in the bottom row is segmented. example

:blossom: Referring image to video mask

With no training on video data, SEEM works perfectly for segmenting videos with whatever queries you specify! example

:sunflower: Audio to mask

We use Whisper to turn audio into a text prompt and then segment the referred object. Try it in our demo!

assets/images/audio.png
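
For reference, transcribing the audio clip is a single call to the open-source `whisper` package; the hand-off into SEEM is shown only as a placeholder, since the demo's text-to-mask entry point is not reproduced here.

```python
# Sketch: speech -> text prompt with openai-whisper (pip install openai-whisper).
# `segment_with_text` is a placeholder, not the actual SEEM call.
import whisper

model = whisper.load_model("base")
result = model.transcribe("query.wav")     # e.g. the user says "the black dog"
text_prompt = result["text"].strip()
print("transcribed prompt:", text_prompt)

# masks = segment_with_text(image, text_prompt)  # placeholder for the demo's text-to-mask path
```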

:deciduous_tree: Examples of different styles

An example of segmenting a meme.

assets/images/emoj.png

An example of segmenting trees in cartoon style.

assets/images/trees_text.png

An example of segmenting a Minecraft image.

assets/images/minecraft.png

An example of using a referring image on a popular teddy bear.

example

Model

SEEM design

Comparison with SAM

In the following figure, we compare the levels of interaction and semantics of three segmentation tasks (edge detection, open-set segmentation, and interactive segmentation). Open-set segmentation usually requires a high level of semantics and no interaction. Compared with SAM, SEEM covers a wider range of both interaction and semantics levels. For example, SAM only supports limited interaction types like points and boxes, and it misses high-semantic tasks since it does not output semantic labels itself. The reasons are: first, SEEM has a unified prompt encoder that encodes all visual and language prompts into a joint representation space; as a consequence, SEEM can support more general usage and has the potential to extend to custom prompts. Second, SEEM works very well on text-to-mask (grounding segmentation) and outputs semantic-aware predictions.

assets/images/compare.jpg
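
The joint-representation idea described above can be pictured with a toy encoder that projects each prompt type into the same embedding dimension, so any mix of prompts becomes one set of query tokens for the decoder. The class, layer names, and shapes below are illustrative assumptions, not the code in this repository.

```python
# Illustrative only: a toy "joint prompt space" where clicks, boxes, and text
# embeddings are projected to the same dimension and concatenated into queries.
import torch
import torch.nn as nn

class ToyPromptEncoder(nn.Module):
    def __init__(self, dim: int = 256, text_dim: int = 512):
        super().__init__()
        self.point_proj = nn.Linear(2, dim)        # (x, y) click
        self.box_proj = nn.Linear(4, dim)          # (x1, y1, x2, y2) box
        self.text_proj = nn.Linear(text_dim, dim)  # e.g. a CLIP-like text embedding

    def forward(self, points=None, boxes=None, text_emb=None):
        tokens = []
        if points is not None:
            tokens.append(self.point_proj(points))
        if boxes is not None:
            tokens.append(self.box_proj(boxes))
        if text_emb is not None:
            tokens.append(self.text_proj(text_emb))
        # Any combination of prompts becomes one sequence of query tokens.
        return torch.cat(tokens, dim=0)

encoder = ToyPromptEncoder()
queries = encoder(points=torch.rand(3, 2), text_emb=torch.rand(1, 512))
print(queries.shape)  # torch.Size([4, 256])
```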

:cupid: Acknowledgements