Open · i-am-invincible opened this issue 3 months ago
I have the same question.
same problem
Same issue.
The online demo and the library are probably using different hyperparameters. Check out this notebook: https://github.com/facebookresearch/segment-anything/blob/main/notebooks/automatic_mask_generator_example.ipynb
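To make the hyperparameter difference concrete, here is a sketch of the overrides used in that linked notebook. Whether these match what the hosted demo uses is an assumption; the point is only that the library exposes knobs whose defaults may differ from the demo's settings.

```python
# Hyperparameter overrides taken from the linked example notebook.
# Whether the hosted demo uses these exact values is an assumption;
# the library's defaults (e.g. points_per_side=32, pred_iou_thresh=0.88)
# are not necessarily what the demo runs.
mask_generator_kwargs = {
    "points_per_side": 32,
    "pred_iou_thresh": 0.86,
    "stability_score_thresh": 0.92,
    "crop_n_layers": 1,
    "crop_n_points_downscale_factor": 2,
    "min_mask_region_area": 100,  # post-processing step; requires opencv-python
}

# Usage (requires the segment-anything package and a loaded SAM model `sam`):
# mask_generator = SamAutomaticMaskGenerator(model=sam, **mask_generator_kwargs)
# masks = mask_generator.generate(image)
```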
Hi, I am working on segmenting car bodies in images using Meta's SAM model. I am seeing a significant difference in performance between the UI demo on the official website and the code provided in the GitHub repository. The UI demo performed remarkably well with just 1-2 clicks; however, when I ran the code, the results were very different and much worse. Despite providing multiple points, the results were not up to the mark compared to the demo.
SAM model version: "vit_h"
Notebook used: notebooks/predictor_example.ipynb
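For reference, this is roughly how the point prompts are built for `SamPredictor` in that notebook. The pixel coordinates below are illustrative placeholders, not the actual clicks from the images in this issue.

```python
import numpy as np

# Hypothetical click positions (illustrative only, not the issue's real coords).
foreground = [(210, 350), (260, 300), (300, 340), (250, 380)]  # clicks on the car body
background = [(50, 60), (500, 80), (40, 400)]                  # clicks off the car

# SamPredictor.predict expects an (N, 2) array of (x, y) pixel coordinates
# and an (N,) label array where 1 = foreground and 0 = background.
point_coords = np.array(foreground + background, dtype=np.float32)
point_labels = np.array([1] * len(foreground) + [0] * len(background))

# Usage (requires segment-anything; call predictor.set_image(image) first):
# masks, scores, logits = predictor.predict(
#     point_coords=point_coords,
#     point_labels=point_labels,
#     multimask_output=True,
# )
```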
Examples:

Image 1 (original image):
- UI demo segmentation: performed well with 4 foreground points and 3 background points.
- My code segmentation: poor results with the same point placement.
Image 2 (original image):
- UI demo segmentation: good results with 4 foreground points and 4 background points.
- My code segmentation: poor results with the same point placement.
I would appreciate any insights into why this discrepancy is happening. Could it be related to hidden hyperparameter settings, optimizers, or learning rates used in the UI demo that aren't included in the GitHub code? If this is the case, would it be possible to provide some guidance?
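One thing worth checking before suspecting hidden hyperparameters: with `multimask_output=True` the predictor returns three candidate masks per prompt, and displaying the wrong one can look like a quality regression. It is an assumption that the demo keeps only the highest-scoring candidate, but selecting by predicted IoU score is the pattern the predictor notebook itself suggests. A minimal sketch with toy data:

```python
import numpy as np

def pick_best_mask(masks, scores):
    """Return the candidate mask with the highest predicted IoU score.

    `masks` is a stack of boolean masks (as returned by predictor.predict
    with multimask_output=True); `scores` is the matching score array.
    """
    best = int(np.argmax(scores))
    return masks[best], best

# Toy example: three 2x2 candidate masks with their predicted scores.
masks = np.array([
    [[1, 0], [0, 0]],
    [[1, 1], [1, 0]],
    [[1, 1], [1, 1]],
], dtype=bool)
scores = np.array([0.55, 0.91, 0.78])

best_mask, idx = pick_best_mask(masks, scores)
```

In practice you would pass the real `masks, scores, logits = predictor.predict(...)` output through the same selection, and optionally feed the chosen mask's logits back via `mask_input` for a refinement pass.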