CVMI-Lab / CoDet

(NeurIPS2023) CoDet: Co-Occurrence Guided Region-Word Alignment for Open-Vocabulary Object Detection

How to run CoDet on custom dataset? #13

Closed: LinSY546749 closed this issue 5 months ago

LinSY546749 commented 5 months ago

Can you give me some guidance on preparing my own dataset? For example, how can I generate cococap_clip_a+cname.npy and coco_clip_a+cname.npy for my own dataset?

machuofan commented 5 months ago

Sure, you can run python tools/dump_clip_features.py --ann {custom_annotation.json} to get your custom_clip_a+cname.npy. But make sure that custom_annotation.json follows the COCO annotation format.
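
In case it helps, here is a minimal sketch of what a COCO-format custom_annotation.json could look like (the category names and all field values are made-up placeholders, and whether dump_clip_features.py needs the images/annotations sections or only categories is an assumption on my side):

```python
# Hypothetical sketch: write a COCO-style annotation file for a custom dataset,
# then pass it to tools/dump_clip_features.py --ann custom_annotation.json.
import json

custom_annotation = {
    "images": [
        {"id": 1, "file_name": "000001.jpg", "height": 480, "width": 640},
    ],
    "annotations": [
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [10.0, 20.0, 100.0, 80.0],   # COCO bbox: [x, y, width, height]
         "area": 8000.0, "iscrowd": 0},
    ],
    "categories": [
        {"id": 1, "name": "widget", "supercategory": "object"},   # placeholder classes
        {"id": 2, "name": "gadget", "supercategory": "object"},
    ],
}

with open("custom_annotation.json", "w") as f:
    json.dump(custom_annotation, f)
```

The resulting custom_clip_a+cname.npy should then contain one CLIP text embedding per entry in "categories".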

Besides, we provide instructions on how to prepare CC3M for detection training, which can also be adapted to process custom image-text pairs.

LinSY546749 commented 5 months ago

Thank you for your patience! Should I use all categories to generate cococap_clip_a+cname.npy while using base categories to generate coco_clip_a+cname.npy? By the way, could you provide the code for generating instances_train2017_seen_2.json and instances_val2017_all_2.json?

machuofan commented 5 months ago

You need to use all categories in COCO to generate coco_clip_a+cname.npy.

Here is the code to generate instances_train2017_seen_2.json (from the OVR-CNN repo).
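
For readers who just want the gist, a minimal sketch of the idea (this is not the actual OVR-CNN script): keep only the annotations whose category belongs to the base/seen split, then drop images that are left without annotations. The BASE_CATEGORY_NAMES set and the file paths below are placeholders; substitute the actual base categories of the OVR-CNN open-vocabulary COCO split.

```python
# Minimal sketch: filter COCO train annotations down to base (seen) categories.
import json

BASE_CATEGORY_NAMES = {"person", "bicycle", "car"}  # placeholder; use the real base split

with open("instances_train2017.json") as f:
    coco = json.load(f)

# Map the base category names to COCO category ids.
base_cat_ids = {c["id"] for c in coco["categories"] if c["name"] in BASE_CATEGORY_NAMES}

seen = {
    "info": coco.get("info", {}),
    "licenses": coco.get("licenses", []),
    "categories": [c for c in coco["categories"] if c["id"] in base_cat_ids],
    "annotations": [a for a in coco["annotations"] if a["category_id"] in base_cat_ids],
}

# Keep only images that still have at least one remaining annotation.
kept_image_ids = {a["image_id"] for a in seen["annotations"]}
seen["images"] = [im for im in coco["images"] if im["id"] in kept_image_ids]

with open("instances_train2017_seen_2.json", "w") as f:
    json.dump(seen, f)
```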

LinSY546749 commented 5 months ago

Hmm, what's the difference between cococap_clip_a+cname.npy and coco_clip_a+cname.npy?

machuofan commented 5 months ago

Well, cococap_clip_a+cname.npy contains over 600 object categories parsed from COCO captions, while coco_clip_a+cname.npy only contains the 80 object categories defined in COCO detection.
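
A quick way to see the difference is to load both files and compare the number of rows (the paths below are assumptions; point them at wherever the metadata files sit in your checkout, and the embedding dimension depends on the CLIP text encoder used):

```python
# Sanity check: each .npy file stores one CLIP text embedding per category,
# so the first dimension should match the category count.
import numpy as np

cap_emb = np.load("cococap_clip_a+cname.npy")
det_emb = np.load("coco_clip_a+cname.npy")

print(cap_emb.shape)  # expected: (600+, embedding_dim) -- concepts parsed from captions
print(det_emb.shape)  # expected: (80, embedding_dim)   -- COCO detection classes
```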

LinSY546749 commented 5 months ago

I got it. Thank you!