-
The computer I am using has a 4060 graphics card with 8 GB of RAM. I ran `python grounded_sam2_tracking_demo_with_continuous_id_gd1.5.py`, the number of images processed was 192, and I found tha…
-
I see a baseline cascaded connection of gDINO and SAM.
However, the project lacks the internal logical reasoning to combine them.
I will run a series of tests on another, similar project, and open-source …
-
Hi,
When I ran `python grounded_sam2_local_demo.py`,
the result was good with the prompt `text="car. road."`:
![grounded_sam2_annotated_image_with_mask](https://github.com/user-attachments/assets/…
-
-
If I want to know the category of segmentation (semantic, instance, or part), can this code do it? If so, what should I do to achieve this?
-
Hi, I am trying to evaluate Grounded SAM on the COCO instance segmentation dataset. For that, I give a text prompt consisting of all the COCO classes separated by commas, and the Grounding di…
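As a side note, the repo's demos separate categories with periods rather than commas (e.g. `text="car. road."`). A minimal, hypothetical sketch of building such a prompt from a class list (the class names shown are a truncated illustration, not the full COCO list):

```python
# Hypothetical sketch: build a Grounding DINO-style text prompt from a list of
# class names. The demos in this repo use lowercase categories separated by
# periods, ending with a final period.
COCO_CLASSES = ["person", "bicycle", "car", "motorcycle"]  # truncated for illustration

def build_prompt(classes):
    # Join the lowercased class names with ". " and terminate with a period.
    return ". ".join(c.lower() for c in classes) + "."

prompt = build_prompt(COCO_CLASSES)
print(prompt)  # person. bicycle. car. motorcycle.
```

Whether comma-separated prompts degrade Grounding DINO's matching is worth checking against the period-separated format above.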
-
### Search before asking
- [X] I have searched the Autodistill [issues](https://github.com/autodistill/autodistill/issues) and found no similar feature requests.
### Question
Hi,
Whether Au…
-
I used a 294×78 PNG to test `grounded_sam2_hf_model_demo.py`, but I got the following errors. Any solution?
```
File "D:\project\Grounded-SAM-2\sam2\sam2_image_predictor.py", line 417, in _predict
l…
```
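One workaround worth trying is padding very small images up to a minimum side length before prediction. This is a hypothetical sketch, not a confirmed fix; the minimum size of 256 and the assumption that the error stems from the small input are both guesses:

```python
import numpy as np

def pad_to_min_size(arr, min_side=256):
    """Zero-pad an HxWxC image array so both sides are at least `min_side`.

    min_side=256 is an assumed threshold, not taken from the SAM 2 code.
    """
    h, w = arr.shape[:2]
    pad_h = max(0, min_side - h)
    pad_w = max(0, min_side - w)
    if pad_h == 0 and pad_w == 0:
        return arr
    # Pad only on the bottom/right so existing pixel coordinates are unchanged.
    return np.pad(arr, ((0, pad_h), (0, pad_w), (0, 0)))

img = np.zeros((78, 294, 3), dtype=np.uint8)  # same shape as the 294x78 PNG
padded = pad_to_min_size(img)
print(padded.shape)  # (256, 294, 3)
```

Because the padding is bottom/right only, any boxes or masks predicted on the padded image can be cropped back to the original 294×78 region without coordinate shifts.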
-
Thank you for your fantastic work and your effort to evaluate zero-shot open-vocabulary segmentation models properly.
I am curious about your approach to handling predictions related to the background c…
-
Hi. Thank you for your nice work.
Could you share the checkpoint and `.ply` files for the trained models used in the paper?
It would help a lot.
Thanks.