@jwyang Hello! Great work. I am trying to run zero-shot inference on my own dataset. I have already extracted the concept embeddings with "extract_concept_features.py", and I changed "MODEL.CLIP.TEXT_EMB_PATH" to point to the resulting file.
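For reference, here is a minimal sketch of how I sanity-check the saved concept embeddings (I am assuming the .pth file holds a single tensor of shape (num_concepts, embedding_dim); the path and the class count of 6 are specific to my dataset):
import torch

# Hypothetical check of my concept embedding file (path is mine)
emb = torch.load(
    "/content/drive/MyDrive/RegionCLIP/output/concept_feats/CHV_datasets_6_rn50x4.pth",
    map_location="cpu",
)
print(type(emb), getattr(emb, "shape", None))
# I expect something like torch.Size([6, 640]) for 6 classes with the RN50x4 text encoder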
Then I register my own dataset with the following code:
from detectron2.data.datasets import register_coco_instances
register_coco_instances("chv_dataset", {}, "/content/drive/MyDrive/RegionCLIP/datasets/CHV/annotations/train.json", "/content/drive/MyDrive/RegionCLIP/datasets/CHV/images")
Then I check whether it was registered:
from detectron2.data import DatasetCatalog
dataset_names = DatasetCatalog.list()
print(dataset_names)
The output shows that 'chv_dataset' is listed:
['coco_2014_train', 'coco_2014_val', …… , 'chv_dataset']
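As an extra sanity check, I also look at the registered metadata in the same notebook session (just a sketch; register_coco_instances should have stored the json path and image root here):
from detectron2.data import MetadataCatalog

# Metadata recorded at registration time; thing_classes is only filled in
# lazily once the COCO json is actually loaded
meta = MetadataCatalog.get("chv_dataset")
print(meta.json_file, meta.image_root)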
However, the zero-shot inference example (test_zeroshot_inference.sh) fails when I run it on my own dataset. This is the command I use:
# RN50x4, GT, COCO
# I changed "NUM_CLASSES" in CLIP_fast_rcnn_R_50_C4_ovd_zsinf.yaml and "DATASETS.TEST" in CLIP_fast_rcnn_R_50_C4_ovd.yaml (see the sanity check after this command)
!python3 ./tools/train_net.py \
--eval-only \
--num-gpus 0 \
--config-file ./configs/COCO-InstanceSegmentation/CLIP_fast_rcnn_R_50_C4_ovd_zsinf.yaml \
MODEL.WEIGHTS ./pretrained_ckpt/regionclip/regionclip_pretrained-cc_rn50x4.pth \
MODEL.CLIP.TEXT_EMB_PATH /content/drive/MyDrive/RegionCLIP/output/concept_feats/CHV_datasets_6_rn50x4.pth \
MODEL.CLIP.CROP_REGION_TYPE GT \
MODEL.CLIP.MULTIPLY_RPN_SCORE False \
MODEL.CLIP.TEXT_EMB_DIM 640 \
MODEL.RESNETS.DEPTH 200 \
MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION 18
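For completeness, here is a minimal sketch of how I would double-check that the NUM_CLASSES value I set matches my annotation file and the concept embedding (the path is mine, and the expected count of 6 is specific to my dataset):
import json

# Count the categories in my COCO-format annotation file; this should match
# the NUM_CLASSES I set in the config and the number of rows in the concept embedding
with open("/content/drive/MyDrive/RegionCLIP/datasets/CHV/annotations/train.json") as f:
    coco = json.load(f)
print(len(coco["categories"]))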
I get the following error:
Traceback (most recent call last):
File "/content/drive/MyDrive/RegionCLIP/detectron2/data/catalog.py", line 51, in get
f = self[name]
File "/usr/lib/python3.10/collections/init.py", line 1106, in getitem
raise KeyError(key)
KeyError: 'chv_dataset'
This is followed by:
KeyError: "Dataset 'chv_dataset' is not registered! Available datasets are: coco_2014_train, coco_2014_val, coco_2014_minival, coco_2014_minival_100, coco_2014_valminusminival, coco_2017_train, coco_2017_val, coco_2017_test, coco_2017_test-dev, coco_2017_val_100, coco_2017_ovd_all_train, coco_2017_ovd_b_train, coco_2017_ovd_t_train, coco_2017_ovd_all_test, coco_2017_ovd_b_test, coco_2017_ovd_t_test, keypoints_coco_2014_train, keypoints_coco_2014_val, keypoints_coco_2014_minival, keypoints_coco_2014_valminusminival, keypoints_coco_2014_minival_100, keypoints_coco_2017_train, keypoints_coco_2017_val, keypoints_coco_2017_val_100, coco_2017_train_panoptic_separated, coco_2017_train_panoptic_stuffonly, coco_2017_train_panoptic, coco_2017_val_panoptic_separated, coco_2017_val_panoptic_stuffonly, coco_2017_val_panoptic, coco_2017_val_100_panoptic_separated, coco_2017_val_100_panoptic_stuffonly, coco_2017_val_100_panoptic, lvis_v1_train, lvis_v1_val, lvis_v1_test_dev, lvis_v1_test_challenge, lvis_v1_train_custom_img, lvis_v1_val_custom_img, lvis_v1_test_dev_custom_img, lvis_v1_test_challenge_custom_img, lvis_v1_train_fullysup, lvis_v1_val_fullysup, lvis_v1_test_dev_fullysup, lvis_v1_test_challenge_fullysup, lvis_v0.5_train, lvis_v0.5_val, lvis_v0.5_val_rand_100, lvis_v0.5_test, lvis_v0.5_train_cocofied, lvis_v0.5_val_cocofied, cityscapes_fine_instance_seg_train, cityscapes_fine_sem_seg_train, cityscapes_fine_instance_seg_val, cityscapes_fine_sem_seg_val, cityscapes_fine_instance_seg_test, cityscapes_fine_sem_seg_test, cityscapes_fine_panoptic_train, cityscapes_fine_panoptic_val, voc_2007_trainval, voc_2007_train, voc_2007_val, voc_2007_test, voc_2012_trainval, voc_2012_train, voc_2012_val, ade20k_sem_seg_train, ade20k_sem_seg_val"
How can I solve this problem? More generally, what is the correct way to run zero-shot inference on a custom dataset?
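In case it clarifies what I am asking: my guess is that registering the dataset in a notebook cell does not carry over to the separate process started by !python3 ./tools/train_net.py, so I am considering registering it inside tools/train_net.py (or a module it imports) before the config is used. A minimal sketch of what I would add there (paths are mine, and I am not sure this is the intended approach):
from detectron2.data.datasets import register_coco_instances

# Register my custom dataset in the same process that runs train_net.py,
# so the name I put in DATASETS.TEST can be resolved at runtime
register_coco_instances(
    "chv_dataset",
    {},
    "/content/drive/MyDrive/RegionCLIP/datasets/CHV/annotations/train.json",
    "/content/drive/MyDrive/RegionCLIP/datasets/CHV/images",
)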