IDEA-Research / Grounded-Segment-Anything

Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything
https://arxiv.org/abs/2401.14159
Apache License 2.0
14.85k stars · 1.37k forks

SAM predictor Error #80

Open lxianl455 opened 1 year ago

lxianl455 commented 1 year ago

When I try to get a mask, I get this error:

```
Traceback (most recent call last):
  File "grounded_sam_demo.py", line 206, in <module>
    masks, _, _ = predictor.predict_torch(
  File "/data0/lxl/environment/anaconda3_22_5_11/envs/try_segment/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/data1/lxl/Grounded-Segment-Anything/segment_anything/segment_anything/predictor.py", line 229, in predict_torch
    low_res_masks, iou_predictions = self.model.mask_decoder(
  File "/data0/lxl/environment/anaconda3_22_5_11/envs/try_segment/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data1/lxl/Grounded-Segment-Anything/segment_anything/segment_anything/modeling/mask_decoder.py", line 94, in forward
    masks, iou_pred = self.predict_masks(
  File "/data1/lxl/Grounded-Segment-Anything/segment_anything/segment_anything/modeling/mask_decoder.py", line 147, in predict_masks
    masks = (hyper_in @ upscaled_embedding.view(b, c, h * w)).view(b, -1, h, w)
RuntimeError: cannot reshape tensor of 0 elements into shape [0, -1, 256, 256] because the unspecified dimension size -1 can be any value and is ambiguous
```
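The failing frame reshapes a zero-element tensor: with zero boxes, the batch dimension `b` is 0, so the total size is 0 and the inferred `-1` dimension is underdetermined. A minimal sketch with NumPy (assuming `np.reshape` resolves `-1` the same way `torch.Tensor.view` does, which holds for this case) reproduces the same ambiguity in isolation:

```python
import numpy as np

# Zero detected boxes -> zero mask embeddings: a tensor with 0 elements.
b, h, w = 0, 256, 256
masks = np.empty((b, h * w))

try:
    # With b == 0 the total size is 0, so the -1 dimension could be
    # any value -- the reshape is ambiguous and raises.
    masks.reshape(b, -1, h, w)
except ValueError as e:
    print("reshape failed:", e)
```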

styfeng commented 1 year ago

same error. any solutions?

wufeim commented 1 year ago

The crash happens because `boxes_filt` is empty: Grounding DINO returned no boxes above your thresholds here:

```python
# run grounding dino model
boxes_filt, pred_phrases = get_grounding_output(
    model, image, text_prompt, box_threshold, text_threshold, device=device
)
```
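Since the reshape error is only a downstream symptom of an empty `boxes_filt`, a simple guard before invoking the SAM predictor avoids the crash. This is a hedged sketch, not code from the repo: `boxes` stands in for `boxes_filt`, and `run_sam` is a hypothetical callable standing in for the `predictor.predict_torch` call:

```python
def segment_boxes(run_sam, boxes):
    """Run SAM only when Grounding DINO produced at least one box.

    run_sam: callable mapping a non-empty list of boxes to masks
             (hypothetical stand-in for predictor.predict_torch).
    boxes:   detections that survived box_threshold / text_threshold.
    """
    if len(boxes) == 0:
        # Nothing passed the thresholds: lower box_threshold / text_threshold,
        # or double-check that text_prompt matches objects in the image.
        return []
    return run_sam(boxes)


# Example: an empty detection list short-circuits instead of crashing.
print(segment_boxes(lambda bs: ["mask"] * len(bs), []))               # -> []
print(segment_boxes(lambda bs: ["mask"] * len(bs), [(0, 0, 10, 10)]))  # -> ['mask']
```

Printing a warning (or lowering the thresholds) at that point gives a much clearer failure mode than the tensor-reshape error deep inside the mask decoder.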
piclez commented 3 weeks ago

https://github.com/IDEA-Research/Grounded-Segment-Anything/issues/100#issuecomment-1506519629