-
First, I was using the fine-tuning documentation to fine-tune it myself,
starting from "yolo_world_v2_s_vlpan_bn_2e-4_80e_8gpus_mask-refine_finetune_coco.pth" trained on the COCO dataset,
but using …
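For context, this is roughly the config fragment I mean (a minimal sketch in the mmengine/mmyolo config style; the `_base_` path and the checkpoint location are placeholders for my local layout, not exact paths from the repo):

```python
# Minimal fine-tuning config sketch (mmengine/mmyolo style).
# Both paths below are placeholders, not my actual setup.
_base_ = ('yolo_world_v2_s_vlpan_bn_2e-4_80e_8gpus_'
          'mask-refine_finetune_coco.py')

# Start training from the released COCO fine-tuned checkpoint
# instead of initializing the detector randomly.
load_from = 'checkpoints/yolo_world_v2_s_vlpan_bn_2e-4_80e_8gpus_mask-refine_finetune_coco.pth'
```

and then launching with the usual `tools/train.py <config>` entry point.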
-
I found a precision gap for yolo-world-v1 between the GitHub repo and the paper. See below:
![yolo-world-v1-github](https://github.com/user-attachments/assets/cfb4ef66-373e-46f1-9ff7-2cae0105f69b)
![yolo-world-v1-…
-
Hi @wondervictor and team, thank you for the great work.
Could you help me with a question regarding the TFLite model demo/inference?
I first converted the _yolo_world_v2_s_obj365v1_goldg_pretrain_128…
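For reference, this is roughly how I try to run the converted model (a minimal sketch using the standard `tf.lite.Interpreter` API; the file name, input size, and the assumption that the text embeddings are baked into the export are mine, not from the docs):

```python
# Minimal TFLite inference sketch; file name and input layout are assumptions.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="yolo_world_v2_s.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy image in whatever layout the interpreter reports (e.g. 1x640x640x3 float32).
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

for out in output_details:
    print(out["name"], interpreter.get_tensor(out["index"]).shape)
```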
-
Hello, thank you for your outstanding work!
I would like to perform video inference directly using yolo_world, and I have used Roboflow Inference and Supervision, but they only provide some benchmar…
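What I am after is a simple per-frame loop like the sketch below; the `run_yolo_world` helper is a placeholder for whatever image-level inference call the repo recommends, not an existing API:

```python
# Per-frame video inference sketch; `run_yolo_world` is a hypothetical helper.
import cv2

def run_yolo_world(frame, texts):
    # placeholder: call the image inference pipeline here
    # and return the annotated frame
    return frame

cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter("output.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

texts = [["person"], ["car"]]
while True:
    ok, frame = cap.read()
    if not ok:
        break
    writer.write(run_yolo_world(frame, texts))

cap.release()
writer.release()
```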
-
Why do the same prompt texts produce results in the Hugging Face demo but not in the YOLO-World program? Which configuration file and weights are used in the Hugging Face demo?
-
Environment: Python 3.10.9, Windows 11 23H2 22631.3447
**When loading the graph, the following node types were not found:
ESAM_ModelLoader_Zho
Yoloworld_ESAM_Zho
Yoloworld_ModelLoader_Zho**
Checking the launcher's log shows: …
-
I used `deploy/export_onnx.py` to export an ONNX model; the command is as follows:
`python deploy/export_onnx.py configs/finetune_coco/yolo_world_v2_m_vlpan_bn_2e-4_80e_8gpus_mask-refine_finetune_coco.py work_…
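As a sanity check of the exported file I run something like the following (a minimal onnxruntime sketch; the file name and the assumed 1x3x640x640 input are placeholders, not values from my export):

```python
# Quick onnxruntime sanity check; file name and input shape are assumptions.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("yolo_world_v2_m.onnx",
                            providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
print("input:", inp.name, inp.shape)

# Assumed NCHW 640x640 input.
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
outputs = sess.run(None, {inp.name: dummy})
for o in outputs:
    print(o.shape)
```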
-
I want to export an ONNX model with the CLIP and YOLOv8 backbones;
configs:
YOLO-World/configs/pretrain/yolo_world_v2_l_clip_large_vlpan_bn_2e-3_100e_4x8gpus_obj365v1_goldg_train_800ft_lvis_minival.py
Y…
-
My local environment:
torch=2.0.1+cu118
mmdet=3.0
mmcv=2.0.1
mmyolo=0.6.0
Running `python export_onnx.py`, the error log is as follows:
Traceback (most recent call last):
File "/media/bowen/6202c499-4f0a-4280-af7e-d2ab4b6c74dd/home/…
-
I fine-tuned yolo-world-L with 1K samples. On v1 the results were normal, with a sizable improvement over zero-shot, but after fine-tuning on v2 the model surprisingly cannot beat zero-shot. Why? Is it due to v2's architecture?