Powered by large-scale pre-training, vision foundation models show significant potential for open-world image understanding. However, unlike large language models, which can tackle diverse language tasks directly, vision foundation models typically require a task-specific model structure followed by fine-tuning on each target task. In this work, we present Matcher, a novel perception paradigm that uses off-the-shelf vision foundation models to address various perception tasks. Matcher can segment anything by using a single in-context example, without any training. In addition, we design three effective components within the Matcher framework that collaborate with these foundation models and unleash their full potential on diverse perception tasks. Matcher demonstrates impressive generalization across a range of segmentation tasks, all without training. Our visualization results further showcase the open-world generality and flexibility of Matcher on images in the wild.
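To make the idea above concrete, below is a minimal sketch of one-shot segmentation via feature matching with frozen foundation models: dense patch features from an off-the-shelf DINOv2 model are matched between the in-context (reference) image and the target image, and the best-matching locations are used as point prompts for SAM. This is an illustrative simplification, not the actual Matcher pipeline; the model variants, the `one_shot_segment` helper, the top-k prompt selection, and the checkpoint path are assumptions made for the example.

```python
# Illustrative one-shot segmentation by feature matching (NOT the Matcher pipeline).
# Assumes the `segment_anything` package, a SAM checkpoint on disk, and internet
# access for torch.hub to fetch DINOv2.
import torch
import torch.nn.functional as F
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

device = "cuda" if torch.cuda.is_available() else "cpu"

# Off-the-shelf feature extractor (DINOv2 ViT-S/14 via torch.hub).
dino = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14").to(device).eval()

# Off-the-shelf promptable segmenter (SAM); the checkpoint path is a placeholder.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth").to(device)
predictor = SamPredictor(sam)

@torch.no_grad()
def patch_features(image_tensor):
    """Return L2-normalized DINOv2 patch features of shape (H/14 * W/14, C)."""
    feats = dino.forward_features(image_tensor)["x_norm_patchtokens"][0]
    return F.normalize(feats, dim=-1)

@torch.no_grad()
def one_shot_segment(ref_img, ref_mask, tgt_img, tgt_img_np, patch=14, topk=8):
    """ref_img / tgt_img: ImageNet-normalized (1, 3, H, W) tensors, H and W divisible by 14;
    ref_mask: (H, W) boolean array for the in-context example;
    tgt_img_np: target image as uint8 HxWx3 RGB (for SAM)."""
    h, w = ref_img.shape[-2] // patch, ref_img.shape[-1] // patch
    ref_feats = patch_features(ref_img.to(device))   # (h*w, C)
    tgt_feats = patch_features(tgt_img.to(device))   # (h*w, C)

    # Keep only reference patches covered by the in-context mask.
    mask_small = torch.from_numpy(ref_mask).float()[None, None]
    mask_small = F.interpolate(mask_small, size=(h, w)).flatten() > 0.5
    ref_feats = ref_feats[mask_small.to(device)]

    # Cosine-similarity matching: best score per target patch over all masked ref patches.
    sim = ref_feats @ tgt_feats.T                     # (n_ref, h*w)
    best = sim.max(dim=0).values                      # (h*w,)
    idx = best.topk(topk).indices.cpu().numpy()       # most similar target patches

    # Convert matched patch indices to point prompts at patch centers (x, y).
    ys, xs = idx // w, idx % w
    points = np.stack([xs * patch + patch // 2, ys * patch + patch // 2], axis=1)

    predictor.set_image(tgt_img_np)
    masks, scores, _ = predictor.predict(
        point_coords=points.astype(np.float32),
        point_labels=np.ones(len(points)),
        multimask_output=True,
    )
    return masks[int(scores.argmax())]
```

The sketch only shows the core idea of combining frozen, off-the-shelf foundation models without any training; it omits the three components mentioned above that the Matcher framework adds on top of this matching-and-prompting loop.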
See installation instructions.
See Preparing Datasets for Matcher.
See Getting Started with Matcher.
https://github.com/aim-uofa/Matcher/assets/119775808/9ff9502d-7d2a-43bc-a8ef-01235097d62b
For academic use, this project is licensed under the 2-clause BSD License. For commercial use, please contact Chunhua Shen.
If you find this project useful in your research, please consider citing:
@article{liu2023matcher,
  title   = {Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching},
  author  = {Liu, Yang and Zhu, Muzhi and Li, Hengtao and Chen, Hao and Wang, Xinlong and Shen, Chunhua},
  journal = {arXiv preprint arXiv:2305.13310},
  year    = {2023}
}
Acknowledgements: SAM, DINOv2, SegGPT, HSNet, Semantic-SAM, and detectron2.