Junfeng Wu*, Yi Jiang*, Qihao Liu, Zehuan Yuan, Xiang Bai†, and Song Bai†
* Equal Contribution, †Correspondence
[Project Page] [Paper] [HuggingFace Demo] [Video Demo]
We will release the following for GLEE :exclamation:
[x] Demo Code
[x] Model Zoo
[x] Comprehensive User Guide
[x] Training Code and Scripts
[ ] Detailed Evaluation Code and Scripts
[ ] Tutorial for Zero-shot Testing or Fine-tuning GLEE on New Datasets
Try our online demo app on [HuggingFace Demo] or use it locally:
```bash
git clone https://github.com/FoundationVision/GLEE
cd GLEE
# supports running on both CPU and GPU
python app.py
```
GLEE has been trained on over ten million images from 16 datasets, fully harnessing both existing annotated data and cost-effective automatically labeled data to construct a diverse training set. This extensive training regime endows GLEE with formidable generalization capabilities.
GLEE consists of an image encoder, a text encoder, a visual prompter, and an object decoder, as illustrated in the framework figure. The text encoder processes arbitrary descriptions related to the task, including 1) object category lists, 2) object names in any form, 3) captions about objects, and 4) referring expressions. The visual prompter encodes user inputs during interactive segmentation, such as 1) points, 2) bounding boxes, and 3) scribbles, into visual representations of the target objects. These textual and visual inputs are then integrated into the object decoder, which extracts the corresponding objects from images.
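For orientation, below is a minimal, unofficial sketch (plain PyTorch) of how the four components described above could fit together. All module names, dimensions, and internals (e.g. `GLEESketch`, `num_queries`, the stand-in backbone and Transformer decoder) are illustrative assumptions, not the released implementation.

```python
# Illustrative sketch of a GLEE-style architecture: image encoder + text encoder
# + visual prompter feeding an object decoder. NOT the official implementation.
import torch
import torch.nn as nn


class GLEESketch(nn.Module):
    def __init__(self, embed_dim=256, num_queries=100, vocab_size=30522):
        super().__init__()
        # Image encoder: stand-in patchify backbone producing a feature map.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, embed_dim, kernel_size=16, stride=16), nn.GELU()
        )
        # Text encoder: stand-in embedding + Transformer over token ids
        # (category lists, captions, or referring expressions).
        self.text_encoder = nn.Sequential(
            nn.Embedding(vocab_size, embed_dim),
            nn.TransformerEncoder(
                nn.TransformerEncoderLayer(embed_dim, nhead=8, batch_first=True), 2
            ),
        )
        # Visual prompter: encodes points / boxes / scribbles, given here as
        # normalized (x1, y1, x2, y2) coordinates, into prompt embeddings.
        self.visual_prompter = nn.Linear(4, embed_dim)
        # Object decoder: learned object queries attend to image features
        # together with the text / visual prompt embeddings.
        self.object_queries = nn.Embedding(num_queries, embed_dim)
        self.object_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(embed_dim, nhead=8, batch_first=True), 6
        )
        self.box_head = nn.Linear(embed_dim, 4)    # box regression
        self.score_head = nn.Linear(embed_dim, 1)  # objectness / matching score

    def forward(self, images, text_ids=None, visual_prompts=None):
        b = images.shape[0]
        feats = self.image_encoder(images).flatten(2).transpose(1, 2)   # (B, HW, C)
        memory = [feats]
        if text_ids is not None:
            memory.append(self.text_encoder(text_ids))                  # (B, T, C)
        if visual_prompts is not None:
            memory.append(self.visual_prompter(visual_prompts))         # (B, P, C)
        memory = torch.cat(memory, dim=1)
        queries = self.object_queries.weight.unsqueeze(0).expand(b, -1, -1)
        hs = self.object_decoder(queries, memory)
        return self.box_head(hs).sigmoid(), self.score_head(hs)


if __name__ == "__main__":
    model = GLEESketch()
    images = torch.randn(1, 3, 256, 256)
    text_ids = torch.randint(0, 30522, (1, 8))   # placeholder ids for a category list
    boxes, scores = model(images, text_ids=text_ids)
    print(boxes.shape, scores.shape)             # (1, 100, 4), (1, 100, 1)
```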
Based on these designs, GLEE seamlessly unifies a wide range of object perception tasks in images and videos, including object detection, instance segmentation, grounding, multi-object tracking (MOT), video instance segmentation (VIS), video object segmentation (VOS), and interactive segmentation and tracking, and it supports open-world/large-vocabulary image and video detection and segmentation tasks.
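As a rough illustration of this unification, the snippet below shows how different tasks reduce to different combinations of text and visual prompts fed to the same model. It reuses the `GLEESketch` class from the sketch above; the prompt formats and token ids are placeholders, not the released API.

```python
# Illustrative only: mapping several tasks onto one (text prompt, visual prompt)
# interface of the GLEESketch model defined in the sketch above.
import torch

model = GLEESketch()
image = torch.randn(1, 3, 256, 256)

# 1) Detection / instance segmentation: the text prompt is a category list
#    (e.g. "person, car, dog"), here stood in by placeholder token ids.
boxes, scores = model(image, text_ids=torch.randint(0, 30522, (1, 6)))

# 2) Grounding / referring expression: the text prompt is a free-form phrase
#    (e.g. "the person in the red coat").
boxes, scores = model(image, text_ids=torch.randint(0, 30522, (1, 12)))

# 3) Interactive segmentation: the visual prompt is a clicked point or drawn box,
#    given as normalized (x1, y1, x2, y2) coordinates.
box_prompt = torch.tensor([[[0.2, 0.3, 0.6, 0.8]]])
boxes, scores = model(image, visual_prompts=box_prompt)

# Video tasks (MOT, VIS, VOS, interactive tracking) reuse the same per-frame
# interface and associate the resulting objects across frames.
```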
If you find GLEE useful in your research, please consider citing:

```bibtex
@misc{wu2023GLEE,
      author={Junfeng Wu and Yi Jiang and Qihao Liu and Zehuan Yuan and Xiang Bai and Song Bai},
      title={General Object Foundation Model for Images and Videos at Scale},
      year={2023},
      eprint={2312.09158},
      archivePrefix={arXiv}
}
```