-
I was trying to run the instance segmentation Co-DETR model pre-trained on Objects365 for LVIS. For this, I used the config _projects/configs/co_dino_vit/co_dino_5scale_lsj_vit_large_lvis_instance.py_…
-
What is the purpose of `CENTERNET.IGNORE_HIGH_FP`?
In the `heatmap_focal_loss` code, it seems to ignore the loss of high-confidence false positives. Isn't the loss of high FPs more important?
```python
if ignore_high_fp > 0:
    …
```
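To make the question concrete, here is a minimal NumPy sketch of how such a mask could act inside a CenterNet-style heatmap focal loss. This is an illustration, not the repository's actual implementation: the function name, the `0.85` threshold, and the exact focal terms are assumptions; the point is only that negatives whose predicted score already exceeds the threshold are dropped from the negative loss.

```python
import numpy as np

def heatmap_focal_loss_sketch(pred, gt, alpha=2.0, ignore_high_fp=0.85):
    """Simplified sketch of a CenterNet-style heatmap focal loss.

    `ignore_high_fp` (hypothetical value 0.85) is a score threshold:
    negative locations whose predicted score already exceeds it are
    masked out of the negative loss, so extremely confident false
    positives stop receiving (potentially noisy) gradient.
    """
    eps = 1e-6
    pos_mask = (gt == 1)
    # positive loss: standard focal term on ground-truth peaks
    pos_loss = -((1 - pred) ** alpha) * np.log(pred + eps) * pos_mask
    # negative loss: down-weight locations near positives via (1 - gt)^4
    neg_weight = (1 - gt) ** 4
    neg_loss = -(pred ** alpha) * np.log(1 - pred + eps) * neg_weight * (~pos_mask)
    if ignore_high_fp > 0:
        # keep only negatives whose score is below the threshold
        not_high_fp = (pred < ignore_high_fp).astype(pred.dtype)
        neg_loss = neg_loss * not_high_fp
    return pos_loss.sum() + neg_loss.sum()
```

With this masking, a negative location scored at, say, 0.9 contributes nothing, so the total loss is strictly smaller than without the mask; one plausible motivation is that such "false positives" in sparsely annotated data are often unlabeled true objects.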
-
```shell
chmod +x tools/dist_train.sh
# sample command for pre-training; use AMP for mixed-precision training
./tools/dist_train.sh configs/pretrain/yolo_world_l_t2i_bn_2e-4_100e_4x8gpus_obj365v1_goldg_train…
```
-
Could you please explain how instances_train2017_seen_2_oriorder_cat_info.json was generated, or provide a link to it?
-
As shown in the figure, YOLO-World-L trained on O365+GoldG achieves 35.0 / 27.1 / 32.8 / 38.3 zero-shot on LVIS-mini, but in the ablation table below, the same configuration yields 32.5 / 22.3 / 30.6 / 36.0. Why is that?
-
Hi authors, thanks for the awesome work! I noticed that in the Objaverse paper you mentioned there are 10K homes generated with ProcTHOR and populated with objects from OBJAVERSE-LVIS. I wonder if the scripts…
-
Hi,
Is there any plan to release the pretrained models and training code for the Detic model in Table 2 (open-vocabulary LVIS, compared to ViLD)?
Thank you
-
I exported the ONNX model of YOLO-World v2 using the Hugging Face demo, used the LVIS categories, and removed the NMS by setting `postprocess_cfg=None`. The FPS of YOLO-World v2 Large is 22.8, lower than the report…
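FPS gaps like this often come from how timing is done (cold start, provider selection, whether preprocessing is included), so it may help to share the measurement harness. A minimal sketch of one — the model path, input name `"images"`, and input shape in the commented usage are assumptions:

```python
import time

import numpy as np  # used in the commented onnxruntime example below

def measure_fps(run_once, warmup=10, iters=100):
    """Time repeated forward passes and report frames per second.

    `run_once` is any zero-argument callable performing one inference
    (e.g. a lambda wrapping onnxruntime's `session.run`). Warmup
    iterations are excluded so lazy initialization and first-call
    kernel compilation don't skew the timing.
    """
    for _ in range(warmup):
        run_once()
    start = time.perf_counter()
    for _ in range(iters):
        run_once()
    elapsed = time.perf_counter() - start
    return iters / elapsed

# hypothetical usage with onnxruntime:
# import onnxruntime as ort
# sess = ort.InferenceSession("yolow_v2_l.onnx",
#                             providers=["CUDAExecutionProvider"])
# x = np.random.rand(1, 3, 640, 640).astype(np.float32)
# print(measure_fps(lambda: sess.run(None, {"images": x})))
```

If the reported number was measured with TensorRT or with preprocessing excluded, a plain onnxruntime timing including host-to-device copies could easily land around 22.8 FPS for the Large model.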
-
![1](https://github.com/Sense-X/Co-DETR/assets/123863030/5600b770-cb61-4519-8fae-98835191ca64)
-
Hello authors, great job! I found that DINO performs very well on the COCO dataset. But if I want to transfer the model to a dataset with thousands or tens of thousands of categories, such as the …