
Segment Any Anomaly

Interactive demos are available on Google Colab and in a Hugging Face Space.

This repository contains the official implementation of Segment Any Anomaly without Training via Hybrid Prompt Regularization, SAA+.

SAA+ aims to segment any anomaly without the need for training. We achieve this by adapting existing foundation models, namely Grounding DINO and Segment Anything, with hybrid prompt regularization.
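For intuition, here is a minimal sketch of that two-stage assembly: Grounding DINO proposes candidate anomaly boxes from a text prompt, and Segment Anything refines each box into a mask. The config/checkpoint paths, image path, prompt text, and thresholds below are illustrative assumptions, not the repository's exact configuration.

```python
# Minimal sketch of the GroundingDINO -> SAM cascade that SAA+ builds on.
# Paths, the "defect." prompt, and thresholds are illustrative assumptions.
import numpy as np
from groundingdino.util.inference import load_model, load_image, predict
from segment_anything import sam_model_registry, SamPredictor

dino = load_model("groundingdino/config/GroundingDINO_SwinT_OGC.py",
                  "weights/groundingdino_swint_ogc.pth")
image_source, image = load_image("inspection_image.png")  # hypothetical input

# Language prompt: ask the open-set detector for defect-like regions.
boxes, logits, phrases = predict(model=dino, image=image, caption="defect.",
                                 box_threshold=0.3, text_threshold=0.25)

sam = sam_model_registry["vit_h"](checkpoint="weights/sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(image_source)

# Convert each normalized (cx, cy, w, h) box to pixel xyxy coordinates
# and let SAM produce a binary mask for it.
h, w, _ = image_source.shape
masks = []
for box in boxes:
    cx, cy, bw, bh = (box.numpy() * np.array([w, h, w, h])).tolist()
    xyxy = np.array([cx - bw / 2, cy - bh / 2, cx + bw / 2, cy + bh / 2])
    mask, _, _ = predictor.predict(box=xyxy, multimask_output=False)
    masks.append(mask[0])  # (H, W) boolean anomaly-candidate mask
```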

:fire:What's New

:gem:Framework

We found that a naive assembly of foundation models suffers from severe language ambiguity. We therefore introduce hybrid prompts, derived from domain expert knowledge and the target image's context, to alleviate this ambiguity. The framework is illustrated below:

Framework
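As a toy illustration of the regularization idea (not the paper's exact rules or thresholds), the candidate masks from the detector-and-segmenter cascade can be re-scored and filtered with property prompts drawn from expert knowledge, e.g. "an anomaly covers only a small fraction of the image" and "there are at most K anomalous regions":

```python
# Toy hybrid-prompt regularization: filter candidate masks with simple
# expert-knowledge priors. Thresholds are illustrative assumptions.
def regularize(masks, scores, max_area_ratio=0.1, top_k=3):
    """Keep at most `top_k` masks whose area satisfies the smallness prior.

    masks  -- list of (H, W) boolean arrays from the SAM stage
    scores -- per-candidate confidences from the detector stage
    """
    kept = []
    for mask, score in zip(masks, scores):
        if mask.sum() / mask.size <= max_area_ratio:  # "anomalies are small"
            kept.append((score, mask))
    kept.sort(key=lambda t: float(t[0]), reverse=True)  # rank by confidence
    return [m for _, m in kept[:top_k]]                 # "at most K anomalies"
```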

Quick Start

:bank:Dataset Preparation

We evaluate SAA+ on four public datasets: MVTec-AD, VisA, KSDD2, and MTD. Additionally, SAA+ was a winning entry in the VAND workshop challenge, which provides a dedicated dataset, VisA-Challenge. To prepare the datasets, please follow the instructions below:

By default, we save the data in the ../datasets directory.

cd $ProjectRoot # e.g., /home/SAA
cd ..
mkdir datasets
cd datasets
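Optionally, you can sanity-check the resulting layout with a short script. The folder names below are hypothetical placeholders; use whichever names the per-dataset instructions produce under ../datasets.

```python
# Optional sanity check that the dataset roots exist under ../datasets.
# Folder names are illustrative assumptions, not the repo's required names.
from pathlib import Path

datasets_root = Path("../datasets")
for name in ["mvtec_anomaly_detection", "VisA", "KSDD2", "MTD"]:  # hypothetical
    path = datasets_root / name
    print(f"{path}: {'found' if path.exists() else 'missing'}")
```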

Then, follow the corresponding per-dataset instructions to prepare MVTec-AD, VisA, VisA-Challenge, KSDD2, and MTD.

:hammer:Environment Setup

You can use our script for one-click environment setup and checkpoint downloading:

cd $ProjectRoot
bash install.sh
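After the script finishes, a quick check that the key dependencies import cleanly can save debugging time later; this snippet is a suggestion, not part of the repository.

```python
# Post-install smoke test: the key packages should import after install.sh.
import torch
import groundingdino        # Grounding DINO package
import segment_anything     # Segment Anything package

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("GroundingDINO and Segment Anything imported OK")
```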

:page_facing_up:Reproduce the public results

MVTec-AD

python run_MVTec.py

VisA-Public

python run_VisA_public.py

VisA-Challenge

python run_VAND_workshop.py

The submission files can be found in ./result_VAND_workshop/visa_challenge-k-0/0shot.
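To double-check what will be submitted, you can list the generated files, for example:

```python
# List the generated VAND-workshop submission files.
from pathlib import Path

for f in sorted(Path("./result_VAND_workshop/visa_challenge-k-0/0shot").iterdir()):
    print(f)
```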

KSDD2

python run_KSDD2.py

MTD

python run_MTD.py

:page_facing_up:Demo Results

Run the following command to produce the demo results:

python demo.py

Demo

:dart:Performance

Results

Qualitative Results

:hammer: Todo List

We plan to add the following features in the near future:

💘 Acknowledgements

Our work is largely inspired by the following projects. Thanks for their admirable contributions.

Stargazers over time

Citation

If you find this project helpful for your research, please consider citing the following BibTeX entries.


@article{cao_segment_2023,
  title={Segment Any Anomaly without Training via Hybrid Prompt Regularization},
  author={Cao, Yunkang and Xu, Xiaohao and Sun, Chen and Cheng, Yuqi and Du, Zongwei and Gao, Liang and Shen, Weiming},
  journal={arXiv:2305.10724},
  year={2023}
}

@article{kirillov2023segany,
  title={Segment Anything}, 
  author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
  journal={arXiv:2304.02643},
  year={2023}
}

@article{ShilongLiu2023GroundingDM,
  title={Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection},
  author={Shilong Liu and Zhaoyang Zeng and Tianhe Ren and Feng Li and Hao Zhang and Jie Yang and Chunyuan Li and Jianwei Yang and Hang Su and Jun Zhu and Lei Zhang},
  journal={arXiv:2303.05499},
  year={2023}
}