-
```shell
!python grounded_sam_demo.py \
  --config GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py \
  --grounded_checkpoint groundingdino_swint_ogc.pth \
  --sam_checkpoint sam_vit_h_4b8939.p…
```
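Runs like the one above can be scripted over a whole folder of images. A minimal sketch, assuming the demo script also accepts `--input_image`, `--text_prompt`, and `--output_dir` flags (verify with `python grounded_sam_demo.py --help`); the text prompt is just an example:

```python
import subprocess
from pathlib import Path

def build_cmd(image_path, text_prompt, output_dir="outputs"):
    # Config/checkpoint paths copied from the command above; the
    # --input_image / --text_prompt / --output_dir flags are assumptions
    # about the demo script's argparse interface.
    return [
        "python", "grounded_sam_demo.py",
        "--config", "GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py",
        "--grounded_checkpoint", "groundingdino_swint_ogc.pth",
        "--sam_checkpoint", "sam_vit_h_4b8939.pth",
        "--input_image", str(image_path),
        "--text_prompt", text_prompt,
        "--output_dir", output_dir,
    ]

if __name__ == "__main__":
    # Process every JPEG in ./images with the same prompts.
    for img in sorted(Path("images").glob("*.jpg")):
        subprocess.run(build_cmd(img, "person . forklift"), check=True)
```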
-
**Description:**
I'm interested in using SAM-2 to automatically process a large number of videos to obtain segmentation results. Given the scale of the dataset, manually clicking to segment each vi…
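The automation being asked about can be sketched as a batch loop. `segment_video` below is a hypothetical placeholder, not a real SAM-2 API: in practice it would wrap whatever prompt-free entry point you settle on (e.g. boxes from a text-grounded detector on the first frame instead of manual clicks):

```python
from pathlib import Path

def segment_video(video_path: Path, out_dir: Path) -> Path:
    # Hypothetical placeholder: replace the body with a real SAM-2 call,
    # prompted e.g. by detector boxes on the first frame rather than clicks.
    out_path = out_dir / f"{video_path.stem}_masks.npz"
    out_path.touch()  # stand-in for saving real mask arrays
    return out_path

def process_dataset(video_dir: str, out_dir: str) -> dict:
    # Run the (placeholder) segmenter over every video in a directory.
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    return {v.name: segment_video(v, out)
            for v in sorted(Path(video_dir).glob("*.mp4"))}
```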
-
I receive the following error when running:
```python
base_model = GroundedEdgeSAM(
    ontology=CaptionOntology(
        {
            "person": "person",
            "forklift": "forklift",
            …
```
-
Recommend updating or adding a note in the README.
`external_mask_extractor.py` contains this hard-coded line:
```python
sam_path = '/home/artur.shagidanov/text-guided-image-editing/Grounded-Segment-Anything/sam_vit_h_4b8939.pth'
```
-
Hello Yunyang @yformer! Thanks for your nice work! We've already supported the [grounded-efficient-sam demo](https://github.com/IDEA-Research/Grounded-Segment-Anything/blob/main/EfficientSAM/grounded_effic…
-
```
python grounded_sam_simple_demo.py
  File "/kaggle/working/Grounded-Segment-Anything/grounded_sam_simple_demo.py", line 51, in
    labels = [
  File "/kaggle/working/Grounded-Segment-Anything/grou…
```
-
I wonder: can we track objects with Grounded SAM 2 in 3D?
-
When I instantiate GroundedSAM2 with the ontology, it reinstalls SAM2 every time I run the program. Is there some sort of flag I can set that would stop that behavior? It will reinstall SAM2 with the same…
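One workaround to try (an assumption, not a documented autodistill flag): install the package yourself ahead of time and guard any install step so it only runs when the module is actually missing. The pip spec in the comment is illustrative:

```python
import importlib.util
import subprocess
import sys

def ensure_installed(module_name: str, pip_spec: str) -> bool:
    """Install pip_spec only if module_name is not importable.

    Returns True if an install was actually triggered."""
    if importlib.util.find_spec(module_name) is not None:
        return False  # already importable; skip the reinstall
    subprocess.run([sys.executable, "-m", "pip", "install", pip_spec],
                   check=True)
    return True

# e.g. (illustrative spec, check the SAM2 repo's install instructions):
# ensure_installed("sam2", "git+https://github.com/facebookresearch/sam2.git")
```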
-
In my image there's a mobile phone, but when I use `predictor.predict_torch`, I only get a mask of the background and no phone.
Below is the code:
```python
sam = sam_model_registry["vit_l"](checkpoin…
```
-
![image](https://github.com/IDEA-Research/Grounded-Segment-Anything/assets/49063302/ac3d7141-62d2-4cda-9fb6-a2e0869513e0)