ZiqinZhou66 / ZegCLIP

Official implementation of the CVPR 2023 paper "ZegCLIP: Towards Adapting CLIP for Zero-shot Semantic Segmentation"

May I test my own images without training? How do I do that? #6

Open zhanghongyong123456 opened 1 year ago

ZiqinZhou66 commented 1 year ago

I appreciate your interest in our work.

Yes, you can test your own images with the following steps:

  1. Download our pre-trained model.
  2. Prepare your dataset in MMSeg format.
  3. Prepare a dataloader file and put it in 'configs/_base_/datasets/dataloader/' (see the sketch after this list).
  4. Prepare a data config file and put it in 'configs/_base_/datasets/'.
  5. Prepare a testing config file for your target data and put it in 'configs/yourData/'.
  6. Run test.py with your config file (a sample command is given below).
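
For step 3, here is a minimal sketch of what a dataloader file can look like, assuming the MMSeg 0.x API that this repo builds on; the class name `MyDataset`, the categories, palette, and file suffixes are placeholders for your own data, not names from this repo:

```python
# Minimal custom-dataset sketch for step 3 (hypothetical names and values).
# Saved as, e.g., configs/_base_/datasets/dataloader/my_dataset.py; the module
# must be imported so the register_module decorator actually runs.
from mmseg.datasets.builder import DATASETS
from mmseg.datasets.custom import CustomDataset


@DATASETS.register_module()
class MyDataset(CustomDataset):
    # Replace with your own category names and display colors.
    CLASSES = ('background', 'object')
    PALETTE = [[0, 0, 0], [255, 0, 0]]

    def __init__(self, **kwargs):
        super().__init__(
            img_suffix='.jpg',        # suffix of your image files
            seg_map_suffix='.png',    # suffix of your annotation maps
            reduce_zero_label=False,  # keep label 0 as a valid class
            **kwargs)
```

For step 6, the invocation then follows the usual MMSeg pattern, something like `python test.py configs/yourData/your_config.py /path/to/checkpoint.pth --eval=mIoU` (both file paths here are placeholders).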
aliman80 commented 1 year ago

Hi, many thanks for your quick response. I did the following before the experiment:

  1. In configs/_base_/datasets/cocostuff_512x512.py, on line 3, I updated the path: data_root = 'Path/to/data/coco_stuff164k'.
  2. Then in configs/coco/vpt_seg_fully_vit-b_512x512_80k_12_100_multi.py I updated the path pretrained = 'Path/to/pretrained/ViT-B-16.pt' to point to your pretrained model.
  3. Then in https://github.com/ZiqinZhou66/ZegCLIP/blob/main/configs/_base_/models/zegclip.py, line 9, pretrained='Path/to/pretrained/RN50.pt': I downloaded the weights from the internet and updated this path.
  4. Then in https://github.com/ZiqinZhou66/ZegCLIP/blob/main/configs/_base_/datasets/dataloader/coco_stuff.py, line 108, I changed the suffix to '.png' in place of '_labelTrainIds.png'.
  5. Then I ran the validation experiment. (These edits are condensed in the sketch below.)
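
In case it helps others, the edits above boil down to changing a handful of config values. A condensed sketch, keeping the placeholder paths from my steps; the suffix field name follows the usual MMSeg convention and is my assumption about what sits at that line:

```python
# configs/_base_/datasets/cocostuff_512x512.py, line 3 (step 1)
data_root = 'Path/to/data/coco_stuff164k'

# configs/coco/vpt_seg_fully_vit-b_512x512_80k_12_100_multi.py (step 2)
pretrained = 'Path/to/pretrained/ViT-B-16.pt'

# configs/_base_/models/zegclip.py, line 9 (step 3)
pretrained = 'Path/to/pretrained/RN50.pt'

# configs/_base_/datasets/dataloader/coco_stuff.py, line 108 (step 4)
seg_map_suffix = '.png'  # changed from '_labelTrainIds.png'
```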