AILab-CVC / YOLO-World

[CVPR 2024] Real-Time Open-Vocabulary Object Detection
https://www.yoloworld.cc
GNU General Public License v3.0
3.9k stars 378 forks

Roadmap of YOLO-World #109

Open wondervictor opened 3 months ago

wondervictor commented 3 months ago

This issue will be kept open and pinned for a long time, as we hope to hear everyone's opinions, suggestions, and needs! We want to make YOLO-World stronger and encourage more diverse applications, especially practical ones. We maintain an open and free attitude. YOLO-World is currently in active development and improvement, and we are trying our best to do well in upstream pre-training and downstream deployment tools. At present, our manpower is limited, so we hope you can give us some time and contribute your experience or help when you can!

If you have a good idea or need, just reply to this issue and @ me. I will respond promptly when I see it, and consider adding it to the TODO list.


TODO List (Community Version)

🎯: High priority or ongoing.

taofuyu commented 3 months ago

torch.einsum() should be replaced by torch.matmul() and torch.sum(), because einsum() is not supported by most edge devices. For example, I rewrote x = torch.einsum('bchw,bkc->bkhw', x, w) as:

batch, channel, height, width = x.shape
_, k, _ = w.shape
x = x.permute(0, 2, 3, 1)          # bchw -> bhwc
x = x.reshape(batch, -1, channel)  # bhwc -> b(hw)c
w = w.permute(0, 2, 1)             # bkc -> bck
x = torch.matmul(x, w)             # b(hw)c x bck -> b(hw)k
x = x.reshape(batch, height, width, k)
x = x.permute(0, 3, 1, 2)          # bhwk -> bkhw

Maybe it is ugly, but it can be deployed. @wondervictor
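The rewrite above can be sanity-checked numerically. A minimal check using NumPy (whose einsum/matmul semantics mirror PyTorch's for this contraction), confirming the matmul/permute/reshape sequence reproduces the original einsum:

```python
import numpy as np

rng = np.random.default_rng(0)
batch, channel, height, width, k = 2, 8, 4, 5, 3
x = rng.standard_normal((batch, channel, height, width))
w = rng.standard_normal((batch, k, channel))

# Reference: the original contraction 'bchw,bkc->bkhw'
ref = np.einsum('bchw,bkc->bkhw', x, w)

# Deployable rewrite using only transpose/reshape/matmul
y = x.transpose(0, 2, 3, 1).reshape(batch, -1, channel)  # bchw -> b(hw)c
y = y @ w.transpose(0, 2, 1)                             # b(hw)c x bck -> b(hw)k
y = y.reshape(batch, height, width, k).transpose(0, 3, 1, 2)  # -> bkhw

assert np.allclose(ref, y)
```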

wondervictor commented 3 months ago

@taofuyu Good idea, got it!

mio410 commented 3 months ago

@wondervictor May I ask where I should modify the code if I want to try other text encoders, e.g., replacing CLIP's text encoder with BEiT-3? Thank you!

wondervictor commented 3 months ago

@mio410 Good idea. We do plan to adopt better and stronger text encoders (e.g., CLIP-Large) and are currently queuing for computation resources to pre-train with them. BEiT-3 is a good choice and we are considering it. BTW, which model size do you need most right now? I can prioritize it.

mio410 commented 3 months ago

@wondervictor May I ask where I should modify the code if I want to try other text encoders, e.g., replacing CLIP's text encoder with BEiT-3? Thank you! Besides, I'd like to try a CLIP model in a different language to see whether prompts in that language work for open-vocabulary detection. Is this possible?

mio410 commented 3 months ago

@mio410 Good idea. We do plan to adopt better and stronger text encoders (e.g., CLIP-Large) and are currently queuing for computation resources to pre-train with them. BEiT-3 is a good choice and we are considering it. BTW, which model size do you need most right now? I can prioritize it.

I'm looking forward to your work! If possible, I'd like to try open-vocabulary detection in other languages. Could you help me with that?

taofuyu commented 3 months ago

@wondervictor May I ask where I should modify the code if I want to try other text encoders, e.g., replacing CLIP's text encoder with BEiT-3? Thank you!

here
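For reference, a hedged sketch of what such a swap could look like in the config, modeled on the backbone override shown later in this thread. The model_name below is illustrative, not a tested identifier, and a BEiT-3 swap would need its own text backbone class and tokenizer since the repo ships HuggingCLIPLanguageBackbone:

```python
# Hypothetical config override: point text_model at a different encoder.
text_model_name = 'openai/clip-vit-large-patch14'  # illustrative only

backbone = dict(
    _delete_=True,
    type='MultiModalYOLOBackbone',
    image_model={{_base_.model.backbone}},
    text_model=dict(
        type='HuggingCLIPLanguageBackbone',  # a BEiT-3 swap would need its own class
        model_name=text_model_name,
        frozen_modules=['all']))
```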

dikapiliao1 commented 3 months ago

YOLO-World uses CLIP word embeddings for reparameterization. If we could replace CLIP with a larger model along the lines of GPT-4, would it understand more, similar to Sora's powerful ability to understand images?

wondervictor commented 3 months ago

YOLO-World uses CLIP word embeddings for reparameterization. If we could replace CLIP with a larger model along the lines of GPT-4, would it understand more, similar to Sora's powerful ability to understand images?

Hi @dikapiliao1, it's a nice idea and we plan to do it.

xianhonghuang commented 3 months ago

Where can I change the visual backbone if I want to use a different one?

wondervictor commented 3 months ago

Where can I change the visual backbone if I want to use a different one?

@xianhonghuang Replace the image_model config as needed:

backbone=dict(
    _delete_=True,
    type='MultiModalYOLOBackbone',
    image_model={{_base_.model.backbone}},
    text_model=dict(
        type='HuggingCLIPLanguageBackbone',
        model_name=text_model_name,
        frozen_modules=['all'])),
xianhonghuang commented 3 months ago

Where can I change the visual backbone if I want to use a different one?

@xianhonghuang Replace the image_model config as needed:

backbone=dict(
    _delete_=True,
    type='MultiModalYOLOBackbone',
    image_model={{_base_.model.backbone}},
    text_model=dict(
        type='HuggingCLIPLanguageBackbone',
        model_name=text_model_name,
        frozen_modules=['all'])),

Do you mean changing this part: _base_ = ('../../third_party/mmyolo/configs/yolov8/' 'yolov8_l_syncbn_fast_8xb16-500e_coco.py')? I'd like to change to the YOLOv7 backbone first.

wondervictor commented 3 months ago

Hi @xianhonghuang, you can directly override the backbone dictionary config, e.g., change it to YOLOv7Backbone. BTW, it is suggested to open a new issue to discuss this question; this issue is for new features and suggestions.
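A hedged sketch of such an override. The YOLOv7Backbone class name and its arguments follow mmyolo's YOLOv7 configs but are not verified against YOLO-World; treat them as assumptions and check the actual class and init kwargs before use:

```python
# Hypothetical: swap the image branch for a YOLOv7 backbone.
backbone = dict(
    _delete_=True,
    type='MultiModalYOLOBackbone',
    image_model=dict(
        type='YOLOv7Backbone',  # assumed name from mmyolo's YOLOv7 configs
        arch='L'),
    text_model=dict(
        type='HuggingCLIPLanguageBackbone',
        model_name=text_model_name,
        frozen_modules=['all']))
```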

RudyCheng commented 2 months ago

The config yolo_world_v2_xl_vlpan_bn_2e-3_100e_4x8gpus_obj365v1_goldg_train_lvis_minival.py does not match its model weights.

wondervictor commented 2 months ago

@RudyCheng, it has been resolved.

xiyuan27 commented 2 months ago

[Object detection on document images] Are there any specialized optimization strategies or support for detection in vertical domains, specifically document images such as invoices and passports?

thgpddl commented 1 week ago

Why does image_demo.py append an empty string after splitting the input text on ","? For example, text=cat,dog,man becomes cat,dog,man," " after processing.
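One plausible explanation (a sketch of the parsing step, not the verbatim demo code): the trailing blank entry acts as a padding/background prompt, so the model always receives one extra "no-class" text alongside the user's classes. The function name parse_texts below is hypothetical:

```python
def parse_texts(text: str) -> list[list[str]]:
    # Split the comma-separated prompt into per-class lists, then append
    # a single blank entry; this trailing ' ' likely serves as a
    # padding/background prompt rather than a real class.
    return [[t.strip()] for t in text.split(',')] + [[' ']]

print(parse_texts('cat,dog,man'))  # [['cat'], ['dog'], ['man'], [' ']]
```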