Open wondervictor opened 3 months ago
torch.einsum() should be replaced by torch.matmul() and torch.sum(), because einsum() is not supported by most edge devices.
For example, I rewrote the code:

```python
x = torch.einsum('bchw,bkc->bkhw', x, w)
```

to
```python
batch, channel, height, width = x.shape
_, k, _ = w.shape
x = x.permute(0, 2, 3, 1)             # bchw -> bhwc
x = x.reshape(batch, -1, channel)     # bhwc -> b(hw)c
w = w.permute(0, 2, 1)                # bkc -> bck
x = torch.matmul(x, w)                # b(hw)c @ bck -> b(hw)k
x = x.reshape(batch, height, width, k)
x = x.permute(0, 3, 1, 2)             # bhwk -> bkhw
```
It may be ugly, but it can be deployed.
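As a quick sanity check, the rewrite can be verified numerically. The sketch below uses NumPy rather than PyTorch (the logic is identical, with `transpose` standing in for `permute`); shapes are arbitrary example values:

```python
import numpy as np

# Verify that the matmul/reshape rewrite matches the original einsum.
batch, channel, height, width, k = 2, 8, 4, 5, 3
rng = np.random.default_rng(0)
x = rng.standard_normal((batch, channel, height, width))
w = rng.standard_normal((batch, k, channel))

ref = np.einsum('bchw,bkc->bkhw', x, w)

y = x.transpose(0, 2, 3, 1)           # bchw -> bhwc
y = y.reshape(batch, -1, channel)     # bhwc -> b(hw)c
y = y @ w.transpose(0, 2, 1)          # b(hw)c @ bck -> b(hw)k
y = y.reshape(batch, height, width, k).transpose(0, 3, 1, 2)  # -> bkhw

assert np.allclose(ref, y)
```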
@wondervictor
@taofuyu Good idea, Got it!
@wondervictor May I ask where should I modify if I want to try using the effect of other text encoders, such as changing the text encoder of CLIP to BEIT-3. Thank you!
@mio410 Good idea, we do plan to use better and stronger text encoders (e.g., CLIP-Large) now and we are queuing for computation resources to pre-train it. BEIT-3 is a good choice and we are considering it. BTW, what model size are you most in need of currently? I can prioritize that.
@wondervictor Besides, I'd like to try using a CLIP model in a different language, to see if I can use prompts in that language for open-vocabulary detection. Is this possible?
I'm looking forward to your work! If possible, I'd like to try open vocabulary detection in other languages. Could you help me with that?
YOLO-World is based on CLIP word embeddings for reparameterization. If we replaced CLIP with a larger model, similar to GPT-4, would it understand more? Similar to Sora's powerful ability to understand images.
Hi @dikapiliao1, it's a nice idea and we plan to do it.
Where can I change the config if I want to use a different visual backbone?
@xianhonghuang Replace the `image_model` config according to your demand:

```python
backbone=dict(
    _delete_=True,
    type='MultiModalYOLOBackbone',
    image_model={{_base_.model.backbone}},
    text_model=dict(
        type='HuggingCLIPLanguageBackbone',
        model_name=text_model_name,
        frozen_modules=['all'])),
```
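Regarding the earlier text-encoder question: in this config, swapping the text encoder comes down to changing `model_name` to another HuggingFace checkpoint, provided its text embedding width matches what the neck/head expect (otherwise a projection layer or retraining is needed). A hypothetical sketch; the model id below is only an example, not verified against the repo:

```python
# Hypothetical: point the text branch at a different HuggingFace CLIP
# checkpoint. The embedding dimension must match what the head expects.
text_model_name = 'openai/clip-vit-large-patch14'  # example id

text_model = dict(
    type='HuggingCLIPLanguageBackbone',
    model_name=text_model_name,
    frozen_modules=['all'])
```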
Do you mean changing this part: base = ('../../third_party/mmyolo/configs/yolov8/' 'yolov8_l_syncbn_fast_8xb16-500e_coco.py')? I'd like to switch to the YOLOv7 backbone first.
Hi @xianhonghuang, you can directly override the backbone dictionary config, e.g., change it to YOLOv7Backbone. BTW, it's suggested to open a new issue to discuss this question, since this issue is meant for new features and suggestions.
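A hedged sketch of what that override might look like. The YOLOv7 config path and backbone inheritance below are assumptions to verify against your local mmyolo checkout, not confirmed from the repo:

```python
# Hypothetical override: inherit a YOLOv7 base config instead of YOLOv8,
# then reuse its backbone as the image branch (check paths/names locally).
_base_ = ('../../third_party/mmyolo/configs/yolov7/'
          'yolov7_l_syncbn_fast_8x16b-300e_coco.py')

backbone = dict(
    _delete_=True,
    type='MultiModalYOLOBackbone',
    image_model={{_base_.model.backbone}},
    text_model=dict(
        type='HuggingCLIPLanguageBackbone',
        model_name=text_model_name,
        frozen_modules=['all']))
```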
The config yolo_world_v2_xl_vlpan_bn_2e-3_100e_4x8gpus_obj365v1_goldg_train_lvis_minival.py does not match its model weights.
@RudyCheng, it has been resolved.
[Target detection on document images] Are there any specialized optimization strategies or support for target detection in vertical domains, specifically for document images such as invoices and passports?
Why does image_demo.py append an empty string to the input text after splitting it on ","? For example, text=cat,dog,man becomes cat,dog,man," " after processing.
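For reference, the behavior being asked about can be reproduced in a few lines. The trailing " " entry is commonly used as a padding/negative text so the class list has a "no-object" slot; this is my interpretation, and the helper name below is hypothetical (check image_demo.py for the exact code):

```python
# Sketch of the parsing asked about: split on ',' and append ' '.
# The extra ' ' entry likely acts as a padding/background class
# (interpretation only; see image_demo.py in the repo).
def parse_texts(text: str):
    return [[t.strip()] for t in text.split(',')] + [[' ']]

print(parse_texts('cat,dog,man'))
# -> [['cat'], ['dog'], ['man'], [' ']]
```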
This issue will be kept open and pinned for a long time, as we hope to hear everyone's opinions, suggestions, and needs! We want to make YOLO-World stronger and encourage more diverse applications, especially practical ones. We maintain an open and free attitude. YOLO-World is currently in active development and improvement, and we are trying our best to do well in upstream pre-training and downstream deployment tools. At present, our manpower is limited, so we hope you can give us some time and contribute your experience or help when you can!
If you have a good idea or need, just reply to this issue and @ me. I will respond promptly when I see it, and consider adding it to the TODO list.
TODO List (Community Version)
🎯: High priority or on-going.
- Replace `torch.einsum` for deployment (👍 thanks @taofuyu, #118)
- mask-refine (#160, #72, #76)