caoyunkang / Segment-Any-Anomaly

Official implementation of "Segment Any Anomaly without Training via Hybrid Prompt Regularization (SAA+)".

Repeat the results on public datasets like MVTec, VisA etc. #10

Closed TengliEd closed 10 months ago

TengliEd commented 1 year ago

Hi @caoyunkang, I am very interested in reproducing the SOTA results on the benchmark datasets. Can you release the code for the prompt engineering and post-processing described in your paper?

Cheers Teng

caoyunkang commented 1 year ago

> Hi @caoyunkang, I am very interested in reproducing the SOTA results on the benchmark datasets. Can you release the code for the prompt engineering and post-processing described in your paper?
>
> Cheers Teng

Hi, thanks for your interest. This repo only contains SAA at this moment. We are cleaning our code for SAA+ and will release it in the near future😁

Yunkang

tzjtatata commented 1 year ago

I found that the results cannot be reproduced without the prompts and the "background" text. For most test images in VisA or MVTec-AD, simple prompts such as 'defect' or 'anomaly', and even more specific prompts such as 'overlong wick' or 'wick' (as in Figure 1 of the paper), are useless for SAM to detect or segment the defects. In fact, the results of the current code are worse than the original SAM with manually controlled bounding boxes and points.

caoyunkang commented 1 year ago

> I found that the results cannot be reproduced without the prompts and the "background" text. For most test images in VisA or MVTec-AD, simple prompts such as 'defect' or 'anomaly', and even more specific prompts such as 'overlong wick' or 'wick' (as in Figure 1 of the paper), are useless for SAM to detect or segment the defects. In fact, the results of the current code are worse than the original SAM with manually controlled bounding boxes and points.

Actually, SAM with manually controlled prompts can produce reasonable results, but it requires substantial human effort. In contrast, SAA/SAA+ generates segmentation results automatically. We will release the revamped SAA+ for evaluation soon. Best.
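The automatic pipeline described above replaces hand-drawn boxes and points with detector-generated box prompts, which then need filtering before they are handed to a mask model. As a minimal sketch (not the repository's actual code; the function name, thresholds, and box format are assumptions), one simple regularization is to drop low-confidence boxes and boxes that cover nearly the whole image:

```python
import numpy as np

def filter_box_prompts(boxes, scores, score_thresh=0.3, max_area_frac=0.8):
    """Hypothetical filter: keep detector boxes that are confident enough
    and do not span (almost) the entire image.

    A common failure of text-grounded detectors on anomaly images is a
    single low-quality box covering the whole object, so dropping
    near-full-image boxes is one simple sanity check.

    boxes: (N, 4) array of xyxy coordinates normalized to [0, 1].
    scores: (N,) array of detector confidences.
    """
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float)
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    keep = (scores >= score_thresh) & (areas <= max_area_frac)
    return boxes[keep], scores[keep]
```

The surviving boxes could then be passed one at a time as the `box` argument of SAM's predictor to obtain masks without any manual clicking.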

caoyunkang commented 1 year ago

Hi, @TengliEd . Please refer to the new branch SAA-plus for the updated version. Best. :)

tzjtatata commented 1 year ago

It is still hard to reproduce the performance reported in the paper. I tried some MVTec samples, using the anomaly type as the description and the dataset category as the object name. I also provided the object count and mask count manually, but the model still fails to recognize the anomaly.

caoyunkang commented 1 year ago

Hello, @tzjtatata

I recommend consulting the Colab notebook available at this link: Colab Link, to replicate the results outlined in the report. Note that the HuggingFace demo might yield different results due to factors such as differences in input image resolution and other finer details.
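Since input resolution is one of the factors mentioned above, fixing it before inference is one way to make runs comparable across demos. A minimal sketch (the target size of 1024 px and the use of nearest-neighbor resampling are assumptions, not what either demo necessarily does):

```python
import numpy as np

def resize_nearest(img: np.ndarray, size: int = 1024) -> np.ndarray:
    """Nearest-neighbor resize of an (H, W[, C]) image to (size, size).

    Illustrative only: normalizing resolution before inference removes one
    source of run-to-run variation between the Colab and HuggingFace demos.
    """
    h, w = img.shape[:2]
    rows = (np.arange(size) * h // size).clip(0, h - 1)
    cols = (np.arange(size) * w // size).clip(0, w - 1)
    return img[rows][:, cols]
```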

Should you require any further assistance or clarification, please don't hesitate to reach out.