-
**Describe the bug**
I've installed the grounding_sam ML backend and it works, but when I click the Actions button as shown in the batch predictions tutorial, the option for batch predictions with a prompt by grounding…
-
Need to experiment and figure out what is happening: MFEM correctly solved the electrode potential (far boundary), so why are the kESI results uniform...
![Image](https://github.com/user-attachments/a…
-
### System Info
- `transformers` version: 4.46.0.dev0
- Platform: Linux-5.15.0-120-generic-x86_64-with-glibc2.35
- Python version: 3.10.15
- Huggingface_hub version: 0.25.2
- Safetensors versio…
-
Hi folks!
Grounding DINO is now available in the Transformers library, enabling easy inference in a few lines of code.
Here's how to use it:
```python
from transformers import AutoProcessor,…
```
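For reference, here is a fuller sketch of the same usage via the documented `AutoProcessor` / `AutoModelForZeroShotObjectDetection` API; the checkpoint id, image URL, and thresholds below are illustrative choices, not taken from the original post:

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

model_id = "IDEA-Research/grounding-dino-tiny"  # illustrative checkpoint
device = "cuda" if torch.cuda.is_available() else "cpu"

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForZeroShotObjectDetection.from_pretrained(model_id).to(device)

# Any image works; this COCO URL is just an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Text queries should be lowercase and end with a dot.
text = "a cat. a remote control."

inputs = processor(images=image, text=text, return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into thresholded detections in pixel coordinates.
results = processor.post_process_grounded_object_detection(
    outputs,
    inputs.input_ids,
    box_threshold=0.4,
    text_threshold=0.3,
    target_sizes=[image.size[::-1]],
)
print(results[0]["boxes"], results[0]["labels"])
```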
-
## 🚀 Feature
Currently, the project uses `GroundingDINO` as the visual grounding model, which is the best-performing model on some benchmark datasets
![current benchmarks for zero-shot object dete…
-
We have this source model from a paper:
![image](https://github.com/user-attachments/assets/44bbe703-8f07-4e9b-a311-71832596b6a8)
It has several natural birth and death processes. The SKEMA model-…
-
Thanks for your great work! May I know whether you have done the VG experiment with image-only input?
-
Hi authors,
Thanks for your great work!
However, for the evaluation in Visual Grounding (Refcoco/+/g), I find that the coordinates of your normalized bboxes do not match the image processed by LLaV…
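One common cause of such a mismatch is normalizing boxes against the original image while the preprocessor first pads the image to a square (as LLaVA-style pipelines typically do) before normalizing. A minimal sketch of converting between the two conventions, assuming symmetric pad-to-square preprocessing rather than this repo's confirmed pipeline:

```python
def original_norm_to_padded_norm(box, width, height):
    """Map a bbox normalized against the original (width, height) image onto a
    square canvas produced by symmetric pad-to-square preprocessing.

    box is (x0, y0, x1, y1) with all values in [0, 1] relative to the original image.
    """
    side = max(width, height)
    pad_x = (side - width) / 2   # padding added equally on left/right
    pad_y = (side - height) / 2  # padding added equally on top/bottom
    x0, y0, x1, y1 = box
    return (
        (x0 * width + pad_x) / side,
        (y0 * height + pad_y) / side,
        (x1 * width + pad_x) / side,
        (y1 * height + pad_y) / side,
    )

# Example: a box covering the left half of a 640x480 landscape image shifts
# vertically once the image is centered on a 640x640 square canvas.
print(original_norm_to_padded_norm((0.0, 0.0, 0.5, 1.0), 640, 480))
```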
-
Hello! Thank you for your work. I would like to ask some questions regarding predicting on multiple images using grounded_sam_demo. I have looked through previous issues; there seems to be some way to…
-
In the markdown file, the grounding dataset can be defined as follows: {"query": "Find ", "response": "", "images": ["/coco2014/train2014/COCO_train2014_000000001507.jpg"], "objects": "[{\"caption\": …
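For illustration only, a minimal sketch of assembling one such JSONL entry in Python; the placeholder tokens in `query`/`response` and every key inside `objects` beyond `caption` are assumptions, not a confirmed schema:

```python
import json

# Hypothetical grounding entry; all field values below are illustrative.
entry = {
    "query": "Find <ref-object>",   # assumed placeholder tag for the referred object
    "response": "<bbox>",           # assumed placeholder tag for the predicted box
    "images": ["/coco2014/train2014/COCO_train2014_000000001507.jpg"],
    # "objects" is stored as a JSON string, matching the escaped quotes in the snippet above.
    "objects": json.dumps([
        {
            "caption": "a dog",          # phrase the query refers to
            "bbox": [10, 20, 200, 240],  # assumed [x0, y0, x1, y1] in pixels
            "image": 0,                  # assumed index into the "images" list
        }
    ]),
}

# One entry per line in a JSONL file.
with open("grounding_dataset.jsonl", "w") as f:
    f.write(json.dumps(entry) + "\n")
```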