-
The web demo on Hugging Face Spaces is not working, throwing this error:
```text
Runtime error
ib/python3.8/site-packages (from lvis==0.5.3) (2.8.2)
Requirement already satisfied: six>=1.1…
```
-
I am using the following command to create an **LVISv0.5 Instance Segmentation video** using the LVISv0.5 configs:
`python demo.py --config-file ../configs/LVISv0.5-InstanceSegmentation/mask_rcnn_X_…
-
Hi! I plan to train the model on a custom dataset, and I ran into a format problem:
- Objaverse LVIS (100 GB) has a structure like this:
`000-***/hash-model-id/{model_render_images}`
I have about 50 mod…
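In case it helps to make the layout concrete, here is a minimal sketch of enumerating it with `pathlib`; the root path, the `000-*` glob, and the `*.png` extension for the render images are all assumptions for illustration:

```python
from pathlib import Path

# Illustrative root of the Objaverse LVIS download; adjust to your setup.
root = Path("objaverse_lvis")

# Walk 000-***/hash-model-id/ and collect the render images per model.
renders = {}
for prefix_dir in sorted(root.glob("000-*")):   # top-level shard directories
    for model_dir in prefix_dir.iterdir():      # one directory per model hash
        if model_dir.is_dir():
            renders[model_dir.name] = sorted(model_dir.glob("*.png"))

print(f"found {len(renders)} models")
```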
-
Hello, I am wondering if there is a log file available for the fine-tuning on 1% LVIS few-shot detection.
-
Great work, and sincere thanks for the open-source code.
Based on the code, I could reproduce the performance of P-ASL on the COCO dataset. However, when I tried to reimplement the results on the LVIS dat…
-
Hi, thanks for sharing the code and model. I've tried the RPN code you provided and am working on building a more generalized RPN. However, I have the following issues and would appreciate your help:
I…
-
I mean, I intuitively thought so. But then I read "The currently supported datasets are COCO, LVIS, Pascal, and Cityscapes. More details and documentation on how to write your own database drivers com…
-
I used eva02_L_lvis_sys_o365 to get the detection results. How can I convert a class ID to a class name?
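Not speaking for the authors, but one common approach is to read the mapping out of the LVIS annotation JSON, whose COCO-style `categories` entries carry both `id` and `name`. A minimal sketch, assuming a local copy of the annotation file (the filename below is illustrative, and the IDs must match the label space your checkpoint was trained on):

```python
import json

# Illustrative path; use whichever LVIS annotation file matches your model.
with open("lvis_v1_val.json") as f:
    categories = json.load(f)["categories"]

# Map COCO-style category ids to human-readable names.
id_to_name = {cat["id"]: cat["name"] for cat in categories}

print(id_to_name[1])  # e.g. the name of category id 1
```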
-
Where can I download the pretrained models listed in the table at https://github.com/AILab-CVC/YOLO-World/tree/master?tab=readme-ov-file#zero-shot-inference-on-lvis-dataset ?
So the models YoloWorldV2…
-
@clin1223
Hi, thanks for your significant work!
We want to reproduce the COCO zero-shot results in Table 3.
We generate the text embeddings via clip-vit-large-patch14-336. We replace the ZERO_SHO…
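For reference, and without knowing the exact script this repo uses, here is a minimal sketch of generating such text embeddings with Hugging Face `transformers` and the `openai/clip-vit-large-patch14-336` checkpoint; the prompt template and the class list are assumptions for illustration:

```python
import torch
from transformers import CLIPModel, CLIPTokenizer

ckpt = "openai/clip-vit-large-patch14-336"
model = CLIPModel.from_pretrained(ckpt)
tokenizer = CLIPTokenizer.from_pretrained(ckpt)

class_names = ["person", "bicycle", "car"]  # illustrative subset of COCO classes
inputs = tokenizer([f"a photo of a {c}" for c in class_names],
                   padding=True, return_tensors="pt")

with torch.no_grad():
    emb = model.get_text_features(**inputs)  # shape: (num_classes, 768)

# L2-normalize, as is standard for CLIP similarity scoring.
emb = emb / emb.norm(dim=-1, keepdim=True)
```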