jianzongwu / Awesome-Open-Vocabulary

(TPAMI 2024) A Survey on Open Vocabulary Learning
https://arxiv.org/abs/2306.15880

Can models like Grounded Language-Image Pre-training be categorized as open vocabulary object detection? #3

Closed JacobYuan7 closed 1 year ago

lxtGH commented 1 year ago

@JacobYuan7 Hi, great question! We have added GLIP and GLIPv2 in the next draft of our paper. Personally, I do not think GLIP is strictly an open-vocabulary object detection paper: GLIP uses Object365 for pre-training, and Object365 contains the novel classes defined in the OV-COCO novel split.

JacobYuan7 commented 1 year ago

@lxtGH Yes, I agree with you. However, methods utilizing CLIP also encounter those novel classes during pre-training. Why can they be categorized as open vocabulary object detection? (I am not trying to stir controversy. I'm just genuinely curious and seeking clarification.) Thanks in advance!

JacobYuan7 commented 1 year ago

@lxtGH I've also been considering the potential of adding a section titled 'Open Vocabulary Relation Detection'. This is an area gaining growing research interest and could add valuable insights to this work. I've even submitted a simple pull request. However, I want to disclose that my perspective might be biased since I have worked on this topic. I'd greatly appreciate your thoughts on this.

lxtGH commented 1 year ago

> @lxtGH Yes, I agree with you. However, methods utilizing CLIP also encounter those novel classes during pre-training. Why can they be categorized as open vocabulary object detection? (I am not trying to stir controversy. I'm just genuinely curious and seeking clarification.) Thanks in advance!

Yes, CLIP itself is trained on many concepts. However, it is adopted only as pre-trained weights for classification or for initialization.

The difference lies in fine-tuning: whether the novel labels and boxes can be seen by the detector.

Object365 contains novel boxes and labels (as defined in COCO), so using it for pre-training constitutes data leakage. Thus, this survey mainly focuses on the setting proposed by OVR-CNN [1] and ViLD [2].

In our experience, with this dataset for pre-training, almost any detector can achieve better results on COCO than any open-vocabulary or zero-shot detector, so the comparison is unfair.
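To make the leakage argument concrete, here is a minimal, hypothetical sketch of checking whether a pre-training vocabulary overlaps with the OV-COCO novel split. The class lists below are small illustrative samples (not the full 365-class Object365 taxonomy or the complete 17-class novel split), and `leaked_classes` is an assumed helper name, not an API from any of the papers:

```python
# Illustrative subset of the OV-COCO novel classes (the split used by
# OVR-CNN and ViLD); the full novel split has 17 categories.
OVCOCO_NOVEL = {
    "airplane", "bus", "cat", "dog", "cow", "elephant",
    "umbrella", "couch", "keyboard", "scissors",
}

# A small sample of Object365 categories (the real dataset has 365).
OBJECTS365_SAMPLE = {
    "person", "car", "airplane", "bus", "cat", "dog",
    "laptop", "keyboard", "umbrella", "bottle",
}

def leaked_classes(pretrain_vocab, novel_vocab):
    """Return the 'novel' classes that the pre-training vocabulary already covers."""
    return sorted(pretrain_vocab & novel_vocab)

leak = leaked_classes(OBJECTS365_SAMPLE, OVCOCO_NOVEL)
print(leak)  # any non-empty result means the detector saw 'novel' boxes/labels
```

Under the OV-COCO protocol, a non-empty intersection like this is exactly why pre-training on Object365 breaks the open-vocabulary assumption: the "novel" categories were supervised with boxes during pre-training.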

Hope this helps!

Reference:

[1] OVR-CNN: Open-Vocabulary Object Detection Using Captions, CVPR-2021

[2] ViLD: Open-vocabulary object detection via vision and language knowledge distillation, ICLR-2022

lxtGH commented 1 year ago

> @lxtGH I've also been considering the potential of adding a section titled 'Open Vocabulary Relation Detection'. This is an area gaining growing research interest and could add valuable insights to this work. I've even submitted a simple pull request. However, I want to disclose that my perspective might be biased since I have worked on this topic. I'd greatly appreciate your thoughts on this.

Yes, thanks for the reminder. We have added it to our internal version of this survey. This direction is quite diverse, and we had missed it.

JacobYuan7 commented 1 year ago

@lxtGH Many thanks for the clarification and the inclusion of papers in this field.