-
Hello, could you tell me which model config and which weight file are used in the online demo?
-
Thanks for your great work!
I want to evaluate the performance of yolo_world_s_clip_base_dual_vlpan_2e-3adamw_32xb16_100e_o365_goldg_train_pretrained-18bea4d2.pth on the val set of obj365v1.
I modi…
-
Hello, thanks for your work! I am a newcomer and really interested in this project.
I encountered an error connecting to HuggingFace; how can I deal with it?
(yoloworld) root@fdb7e138bfe8:~/…
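A common workaround for this class of error is to route Hub downloads through a mirror, or to pre-download the CLIP text encoder and load it from disk. A minimal sketch, assuming the mirror URL `https://hf-mirror.com` and a hypothetical local directory; this is not an official fix from the authors:

```python
import os

# HF_ENDPOINT must be set BEFORE `transformers`/`huggingface_hub` is
# imported, otherwise the default endpoint is already baked in.
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"  # assumed mirror URL

# Alternative: download the CLIP text encoder on a machine with access,
# copy it over, and point the config's model name at the local directory,
# e.g. "openai/clip-vit-base-patch32" -> "/models/clip-vit-base-patch32"
# (the local path here is hypothetical).
print(os.environ["HF_ENDPOINT"])
```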
-
Hi everyone,
I ran ./tools/dist_train.sh configs/pretrain/yolo_world_v2_x_vlpan_bn_2e-3_100e_4x8gpus_obj365v1_goldg_train_lvis_minival.py 1 --amp to start training, but got the error below. Is it related to training on Windows?
[W socket.cpp:697] [c10d]…
-
1. For on-device deployment of the inference service, can CLIP's text model be removed? 2. Does the "offline vocabulary" at inference time mean the vocabulary has already been encoded and does not need to be re-encoded? 3. What is the difference between the pretrained models yolo-world-l-clip and yolo-world-l?
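For questions 1 and 2, the idea behind an offline vocabulary can be sketched as: encode the class names once, cache the embeddings, and reuse them at inference so the text tower never runs on-device. The function names, the `.npy` path, and the dummy 2-class vocabulary below are all illustrative assumptions, not the repo's actual API:

```python
import numpy as np

def cache_vocabulary(text_encoder, class_names, out_path="vocab_embeds.npy"):
    # Run the (heavy) text encoder ONCE, offline, and cache the result.
    embeds = text_encoder(class_names)   # shape (num_classes, embed_dim)
    np.save(out_path, embeds)
    return embeds

def load_vocabulary(path="vocab_embeds.npy"):
    # At inference only this load is needed; the text model can be dropped.
    return np.load(path)

# Usage with a dummy encoder standing in for the real CLIP text model:
dummy_encoder = lambda names: np.ones((len(names), 512), dtype=np.float32)
cached = cache_vocabulary(dummy_encoder, ["person", "dog"])
loaded = load_vocabulary()
```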
-
My reproduced YOLO-World-v2-L pretrained model evaluates slightly below the model provided by the authors.
It corresponds to this row of the model zoo:
![image](https://github.com/AILab-CVC/YOLO-World/assets/17638735/fc2ff326-1d9e-47da-b804-918938f2b807)
Config file used: yolo_world_v2_l_vlpan_b…
-
Which versions of opencv-python, opencv-python-headless, and albumentations should be used together? After installing from basic_requirements.txt I get an error; following the hint in your issues I switched albumentations to version 1.4.4, but fine-tuning still fails. Could you provide a pre-built Docker image? Installation has cost a lot of time and still hasn't succeeded.
-
I am trying to use a video frame as the input. However, I found that the code uses an [image path](https://github.com/AILab-CVC/YOLO-World/blob/3264b61a03b073852b1559fa896cb12c6ff1aa41/image_demo.py#L79C…
-
![image](https://github.com/AILab-CVC/YOLO-World/assets/18739352/eb7362cc-cead-42cc-a596-ff4d9dfc905e)
ERROR: input_onnx_file_path: work_dirs/yolo_world_v2_s_vlpan_bn_2e-4_80e_8gpus_mask-refine_finet…
-
I ran into the same issue as #71 and #78.
I modified the config in configs/prompt_tuning_coco/ and generated a custom embedding file to fine-tune on my dataset, which has 4 categories.
When infer…