-
## 📚 Documentation Issue
Hey! I'm following the instructions to install any of these datasets:
https://github.com/facebookresearch/detectron2/blob/main/datasets/README.md
I'm try…
-
Hi, I'd like to run this code with the LVIS dataset.
LVIS annotations are similar to COCO's, but they do not contain the 'iscrowd' field,
while line 233 of coco.py handles that field. How should I fix this?
if annotation['iscrowd']:
# Use negative class ID for crowd…
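One possible fix, sketched below under the assumption that the loader only uses `iscrowd` to flag crowd regions: read the key with `dict.get` and default it to `0`, so LVIS annotations (which lack the field) are treated as non-crowd. The `class_id_for` helper and the negative-ID convention shown are illustrative stand-ins for the actual coco.py logic, not the repository's exact code.

```python
# Hedged sketch: default a missing 'iscrowd' key to 0 so LVIS
# annotations pass through the COCO-style crowd check unchanged.
# `annotation` dicts and `class_id_for` are hypothetical examples.

def class_id_for(annotation):
    """Return the class ID, negated when the annotation marks a crowd."""
    class_id = annotation["category_id"]
    if annotation.get("iscrowd", 0):  # missing key -> 0 -> not crowd
        class_id = -class_id  # negative class ID marks crowd regions
    return class_id

# COCO-style annotation with the field present:
print(class_id_for({"category_id": 5, "iscrowd": 1}))  # -5
# LVIS-style annotation without the field:
print(class_id_for({"category_id": 5}))                # 5
```

This keeps COCO behavior identical while making the same code path work for LVIS.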
-
The AP of GroundingDINO-T on LVIS MiniVal reported in your paper is inconsistent with the value in the GroundingDINO paper.
-
Could you share the weights so we can fine-tune them? Thanks.
-
Why do the same prompt texts produce results in the Hugging Face demo but not in the YOLO-World program? Which configuration file and weights does the Hugging Face demo use?
-
Hello, thank you for your outstanding work!
I would like to run video inference directly with yolo_world. I have used Roboflow Inference and Supervision, but they only provide some benchmar…
-
**Describe the bug**
Training on the COCO dataset works fine, but I hit this bug when training on LVIS.
**Environment**
1. Please run `python mmdet/utils/collect_env.py` to collect necessary environment inf…
-
Hello, thanks for the awesome collection of demos and code.
I wonder whether you have benchmarks or comparisons of the text-grounding segmentation capabilities of GroundingDino vs. Florence-2? While I've b…
-
I use the command `bash tools/dist_test.sh configs/inference/clip_end2end_faster_rcnn_r50_c4_1x_lvis_v0.5.py epoch_12_end2end_coco.pth 2 --eval bbox`, where epoch_12_end2end_coco.pth is from the model you pro…