-
In the paper, you state that you evaluate models trained on Task 1 on the 4,810 validation images of LVIS v1.0; however, there are 19,809 images in the LVIS v1.0 validation set. Will you release th…
-
I pre-trained the S model using configs/pretrain/yolo_world_s_vlpan_bn_2e-3_100e_4x8gpus_obj365v1_goldg_train_lvis_minival.py and found a ~1% difference compared to the log in the repo.
`
Averag…
-
Thanks to the authors for open-sourcing such an excellent project. When I reproduce YOLO-World-v2-L, the last lvis/bbox_AP in `yolo_world_v2_m_o365_goldg_pretrain_part_2.log` is 23.50, but AP_mini in …
-
Hello, roughly how much computing resources would be needed to reproduce the experiments?
-
I have tried many different settings, but during the training of SegRefiner, the IoU drops to zero after only a few steps. Is this a known issue?
Thanks.
![image](https://github.com/MengyuWang826…
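As a debugging aid, a standalone IoU check on binary masks (plain NumPy, independent of the SegRefiner codebase, whose internals are not shown here) can confirm whether the metric is correct and the predictions are genuinely empty:

```python
import numpy as np

def mask_iou(pred, gt):
    """IoU of two binary masks; returns 0.0 when the union is empty.

    An IoU that collapses to zero mid-training usually means the
    predicted mask itself has become all-background.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 0.0
    return float(np.logical_and(pred, gt).sum() / union)
```

Logging `pred.sum()` alongside this value distinguishes a broken metric from a degenerate (all-zero) prediction.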
-
Just want to share how to run on the latest version of Detectron2 (v0.6):
## 1. Environment
CUDA 11.1
Torch >= 1.9.0
```shell
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 …
```
-
### Search before asking
- [X] I have searched the YOLOv8 [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### YOLOv8 Component
_No response_
### Bug
…
-
Could you share the sampled images for MoE training, i.e. the images for Stage II: SViT-157k, LVIS-220k, LRV-331k, MIMIC-IT-256k?
-
Hi,
Thanks for your amazing work.
I was trying image_demo.py but am facing a syntax error. I am running it on Colab.
The command used is
```
!python image_demo.py /content/yolo_world_seg_l_dual_…
-
I have another question regarding building the dataset that will feed the prototypes of the novel classes.
My use case is to annotate new images using already annotated images that I will use to build my…
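To make the use case concrete, here is a minimal sketch of prototype building from annotated support images: each class prototype is the normalized mean of that class's region embeddings, and a new region is assigned to the class with the highest cosine similarity. All function names are hypothetical, and the embedding extractor of the actual model is assumed to exist upstream:

```python
import numpy as np

def build_prototypes(feats, labels):
    """Average L2-normalized embeddings per class into one prototype each.

    feats:  (N, D) array of region embeddings from the annotated images
    labels: length-N list of class names (novel classes included)
    """
    protos = {}
    for cls in set(labels):
        sel = feats[[i for i, l in enumerate(labels) if l == cls]]
        sel = sel / np.linalg.norm(sel, axis=1, keepdims=True)
        p = sel.mean(axis=0)
        protos[cls] = p / np.linalg.norm(p)
    return protos

def classify(feat, protos):
    """Assign a query embedding to the class with highest cosine similarity."""
    f = feat / np.linalg.norm(feat)
    return max(protos, key=lambda c: float(f @ protos[c]))
```

With this scheme, adding a novel class only requires a handful of annotated regions to form its prototype; no retraining of the embedding model is needed.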