MendelXu / SAN

Open-vocabulary Semantic Segmentation
https://mendelxu.github.io/SAN/
MIT License

Training on my own dataset #39

Closed: HarryMark927 closed this issue 8 months ago

HarryMark927 commented 9 months ago

Hello, author! Sorry to bother you. I would like to ask: if I want to train and test on my own dataset, which files do I need to modify?

MendelXu commented 9 months ago
  1. Add a script that defines your dataset in san/data/datasets. You can refer to https://github.com/MendelXu/SAN/blob/main/san/data/datasets/register_voc.py. Remember to import it in san/data/datasets/__init__.py (a sketch of such a script follows this list).
  2. Train the model with python train_net.py --config-file <CONFIG_FILE> --num-gpus <NUM_GPU> OUTPUT_DIR <OUTPUT_PATH> MODEL.SAN.NUM_CLASSES <NUM_OF_CLASSES>. To evaluate a trained model afterwards, add --eval-only and MODEL.WEIGHTS <TRAINED_MODEL_PATH>.
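
For reference, here is a minimal sketch of such a registration script, following the load_sem_seg pattern that the repo's existing register scripts use. The dataset name, directory layout, and class list below are hypothetical placeholders, not names from the repo:

```python
# Hypothetical registration script, e.g. san/data/datasets/register_my_dataset.py.
# All names, paths, and classes below are placeholders for your own data.
import os

from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.data.datasets import load_sem_seg

# Placeholder class names; these become the vocabulary for your dataset.
MY_CLASSES = ["background", "road", "building"]


def register_my_dataset(root):
    root = os.path.join(root, "my_dataset")  # assumed directory layout
    for split, image_dirname, sem_seg_dirname in [
        ("train", "images/train", "annotations/train"),
        ("test", "images/test", "annotations/test"),
    ]:
        image_dir = os.path.join(root, image_dirname)
        gt_dir = os.path.join(root, sem_seg_dirname)
        name = f"my_dataset_sem_seg_{split}"
        # load_sem_seg pairs each image with the mask of the same stem.
        DatasetCatalog.register(
            name,
            lambda x=image_dir, y=gt_dir: load_sem_seg(
                y, x, gt_ext="png", image_ext="jpg"
            ),
        )
        MetadataCatalog.get(name).set(
            image_root=image_dir,
            sem_seg_root=gt_dir,
            evaluator_type="sem_seg",
            ignore_label=255,  # label value excluded from the loss
            stuff_classes=MY_CLASSES,
        )


_root = os.getenv("DETECTRON2_DATASETS", "datasets")
register_my_dataset(_root)
```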

Hint: To make sure the training process works properly, check the batch data before training for a long time (you can add a breakpoint where the batches are produced and inspect one, as in the sketch below).
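
A hypothetical way to do that inspection, assuming this runs from the repo root and that cfg is built the same way train_net.py builds it:

```python
# Hypothetical sanity check: pull one batch from the train loader and look
# at it before committing to a long run. `cfg` is assumed to be set up
# the same way train_net.py sets it up before this is called.
from train_net import Trainer


def inspect_one_batch(cfg):
    data_loader = Trainer.build_train_loader(cfg)
    batch = next(iter(data_loader))
    for item in batch:
        # Field names follow detectron2's semantic-segmentation mappers;
        # adjust if your mapper emits different keys.
        print(item["file_name"], tuple(item["image"].shape))
        print("labels present:", item["sem_seg"].unique().tolist())
```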

HarryMark927 commented 9 months ago

Sorry to bother you again. I ran the command "python train_net.py --config-file ./configs/san_clip_vit_res4_coco.yaml --num-gpus 1 OUTPUT_DIR ./log"

but it failed with:

Traceback (most recent call last):
  File "/data2/borong/SAN/train_net.py", line 280, in <module>
    launch(
  File "/data2/borong_anaconda/envs/san/lib/python3.9/site-packages/detectron2/engine/launch.py", line 82, in launch
    main_func(*args)
  File "/data2/borong/SAN/train_net.py", line 271, in main
    trainer = Trainer(cfg)
  File "/data2/borong_anaconda/envs/san/lib/python3.9/site-packages/detectron2/engine/defaults.py", line 378, in __init__
    data_loader = self.build_train_loader(cfg)
  File "/data2/borong/SAN/train_net.py", line 94, in build_train_loader
    return build_detection_train_loader(cfg, mapper=mapper)
  File "/data2/borong_anaconda/envs/san/lib/python3.9/site-packages/detectron2/config/config.py", line 207, in wrapped
    explicit_args = _get_args_from_config(from_config, *args, **kwargs)
  File "/data2/borong_anaconda/envs/san/lib/python3.9/site-packages/detectron2/config/config.py", line 245, in _get_args_from_config
    ret = from_config_func(*args, **kwargs)
  File "/data2/borong/SAN/san/data/build.py", line 161, in _train_loader_from_config
    dataset = get_detection_dataset_dicts(
  File "/data2/borong/SAN/san/data/build.py", line 124, in get_detection_dataset_dicts
    dataset_dicts = [
  File "/data2/borong/SAN/san/data/build.py", line 125, in <listcomp>
    wrap_metas(DatasetCatalog.get(dataset_name), dataset_name=dataset_name)
  File "/data2/borong_anaconda/envs/san/lib/python3.9/site-packages/detectron2/data/catalog.py", line 58, in get
    return f()
  File "/data2/borong/SAN/san/data/datasets/register_coco_stuff_164k.py", line 212, in <lambda>
    lambda x=image_dir, y=gt_dir: load_sem_seg(
  File "/data2/borong_anaconda/envs/san/lib/python3.9/site-packages/detectron2/data/datasets/coco.py", line 266, in load_sem_seg
    (os.path.join(image_root, f) for f in PathManager.ls(image_root) if f.endswith(image_ext)),
  File "/data2/borong_anaconda/envs/san/lib/python3.9/site-packages/iopath/common/file_io.py", line 1294, in ls
    return self.__get_path_handler(path)._ls(path, **kwargs)
  File "/data2/borong_anaconda/envs/san/lib/python3.9/site-packages/iopath/common/file_io.py", line 714, in _ls
    return os.listdir(self._get_path_with_cwd(path))
FileNotFoundError: [Errno 2] No such file or directory: 'datasets/coco/train2017'

I checked the code but could not find where to replace the COCO dataset with my own dataset, and I have already prepared my dataset in the VOC-style layout as you suggested. My purpose is to train your model on my own dataset and test it later.

MendelXu commented 9 months ago

python train_net.py --config-file <CONFIG_FILE> --num-gpus <NUM_GPU> OUTPUT_DIR <OUTPUT_PATH> MODEL.SAN.NUM_CLASSES <NUM_OF_CLASSES> DATASETS.TRAIN <YOUR_DATASET_NAME>
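
For example, with the hypothetical dataset registered in the sketch above (the dataset names and the class count are placeholders), a concrete invocation could look like:

python train_net.py --config-file configs/san_clip_vit_res4_coco.yaml --num-gpus 1 OUTPUT_DIR ./log MODEL.SAN.NUM_CLASSES 3 DATASETS.TRAIN '("my_dataset_sem_seg_train",)' DATASETS.TEST '("my_dataset_sem_seg_test",)'

Note that DATASETS.TRAIN is a tuple of dataset names in detectron2 configs, hence the quoted tuple syntax on the command line.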