-
When running SAM, the following error occurs:

```
Grounded-Segment-Anything/segment_anything/segment_anything/modeling/mask_decoder.py", line 144, in predict_masks
    masks = (hyper_in @ upscaled_embedding.view(b, c, h * w)).v…
```
-
Can you provide a more specific guide on how to reproduce your demo with VISAM?
-
My images: https://drive.google.com/drive/folders/1Q9zSM8sCsQ4n-6QKl4GEujuznx-abxvh?usp=drive_link
With the existing script, I only want to mask the battery, but it grabs the base underneath.
Ca…
-
Hi, I am interested in the VLpart model. Where can I find the paper? Thanks.
-
Hi,
Thanks for your work; I found it very interesting.
I was wondering whether it is possible to get more per-pixel features using your pre-trained model. Currently, using the provided example scr…
-
Greetings,
After experimenting with various resources offered in the repository, I am interested in exploring the capabilities of the Grounding DINO model. Specifically, I am curious if it is feas…
-
Since the SAM model is already implemented, we can use text prompts to segment images with GroundingDINO.
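In that pipeline, the glue between the two models is largely a box-format conversion: GroundingDINO returns boxes as normalized (cx, cy, w, h), while SAM's predictor expects absolute (x1, y1, x2, y2) pixel coordinates. A minimal sketch of that conversion (the helper name is my own, not from the repo):

```python
def cxcywh_to_xyxy(box, img_w, img_h):
    """Convert a normalized (cx, cy, w, h) box to absolute (x1, y1, x2, y2) pixels.

    Hypothetical helper illustrating the conversion step between
    GroundingDINO's output and SAM's box-prompt input.
    """
    cx, cy, bw, bh = box
    x1 = (cx - bw / 2) * img_w
    y1 = (cy - bh / 2) * img_h
    x2 = (cx + bw / 2) * img_w
    y2 = (cy + bh / 2) * img_h
    return (x1, y1, x2, y2)

# A box centered in a 100x100 image, covering half of each dimension:
print(cxcywh_to_xyxy((0.5, 0.5, 0.5, 0.5), 100, 100))  # → (25.0, 25.0, 75.0, 75.0)
```

The converted box can then be passed as the box prompt to SAM's predictor (e.g. `SamPredictor.predict(box=...)`) to obtain the mask for that text-grounded detection.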
-
When will the training code for Grounded-Segment-Anything be released?
-
![WeCom screenshot_16835361515320](https://user-images.githubusercontent.com/49953067/236783303-c418ef15-39f4-4d33-948c-bb441af70266.png)
![WeCom screenshot_16835361648085](https://user-images.githubusercontent.com/4995…