Maritime-Moon closed this issue 3 months ago
I'm running it on PyCharm on Windows
I use the following command in the terminal:

```
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --master_port 10001 --nproc_per_node=4 tools/detector_pretrain_net.py --config-file "E:/learn_code/SGG-Benchmark-main/configs/e2e_relation_detector_X_101_32_8_FPN_1x.yaml" SOLVER.IMS_PER_BATCH 8 TEST.IMS_PER_BATCH 4 DTYPE "float16" SOLVER.MAX_ITER 50000 SOLVER.STEPS "(30000, 45000)" SOLVER.VAL_PERIOD 2000 SOLVER.CHECKPOINT_PERIOD 2000 MODEL.RELATION_ON False OUTPUT_DIR E:/learn_code/SGG-Benchmark-main/checkpoints/pretrained_faster_rcnn SOLVER.PRE_VAL False
```

and I'm getting the following error (translated from Chinese):

```
CUDA_VISIBLE_DEVICES=0 : The term 'CUDA_VISIBLE_DEVICES=0' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
    + CategoryInfo          : ObjectNotFound: (CUDA_VISIBLE_DEVICES=0:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException
```
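The `VAR=value command` prefix is bash syntax; PowerShell does not recognize it, which is exactly what the `CommandNotFoundException` is saying. A minimal sketch of both forms (the launch arguments are copied from the command above, shortened with `...`):

```shell
# bash (Linux/macOS): an inline VAR=value prefix sets the variable
# for this one command only
CUDA_VISIBLE_DEVICES=0 python3 -c 'import os; print(os.environ["CUDA_VISIBLE_DEVICES"])'

# PowerShell (Windows): there is no prefix syntax; set the variable
# first, then run the launch command on its own line:
#   $env:CUDA_VISIBLE_DEVICES = "0"
#   python -m torch.distributed.launch --master_port 10001 --nproc_per_node=1 tools/detector_pretrain_net.py ...
```

Note also that the original command asks for `--nproc_per_node=4` while exposing only one GPU (`CUDA_VISIBLE_DEVICES=0`); on a single-GPU machine you likely want `--nproc_per_node=1`.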
Yes, you have to replace the MODEL.BACKBONE.CONV_BODY key with MODEL.BACKBONE.TYPE in your .yaml file and it should work. I renamed the key a few months ago but forgot to update every config file.
I would like to know whether the datasets you use were processed by yourself. If so, please let me know how you changed the training dataset.
Is the datasets folder in your project a folder for training data?
```python
DATASETS = {
    "VG150": {
        "img_dir": IMG_DIR+"VG_100K",
        "roidb_file": DATA_DIR+"datasets/VG150/VG-SGG-with-attri.h5",
        "dict_file": DATA_DIR+"datasets/VG150/VG-SGG-dicts-with-attri.json",
        "image_file": DATA_DIR+"datasets/vg/image_data.json",
        "zeroshot_file": DATA_DIR+"datasets/VG150/zeroshot_triplet.pytorch",
        "informative_file": "",  # DATA_DIR+"datasets/informative_sg.json",
    },
    "PSG": {
        "img_dir": "/home/maelic/Documents/Datasets/COCO/",
        "ann_file": DATA_DIR+"datasets/psg/psg_train_val.json",
        "informative_file": "",  # DATA_DIR+"datasets/informative_sg.json",
    },
    "VrR-VG_filtered_with_attribute": {
        "img_dir": IMG_DIR+"VG_100K",
        "roidb_file": "VG/VrR-VG/VrR_VG-SGG-with-attri.h5",
        "dict_file": "VG/VrR-VG/VrR_VG-SGG-dicts-with-attri.json",
        "image_file": "VG/VrR-VG/image_data.json",
        "capgraphs_file": "VG/vg_capgraphs_anno.json",
    },
    "VG_indoor_filtered": {
        "img_dir": IMG_DIR+"VG_100K",
        "roidb_file": DATA_DIR+"datasets/IndoorVG_4/VG-SGG-augmented-penet-cat.h5",
        "dict_file": DATA_DIR+"datasets/IndoorVG_4/VG-SGG-dicts.json",
        "image_file": DATA_DIR+"datasets/vg/image_data.json",
        "zeroshot_file": DATA_DIR+"datasets/IndoorVG_4/zero_shot_triplets.pytorch",
        "informative_file": DATA_DIR+"datasets/informative_sg.json",
    },
    "VG178": {
        "img_dir": IMG_DIR+"VG_100K",
        "roidb_file": DATA_DIR+"VG178/VG-SGG.h5",
        "dict_file": DATA_DIR+"VG178/VG-SGG-dicts.json",
        "image_file": DATA_DIR+"vg/image_data.json",
        "zeroshot_file": DATA_DIR+"VG178/zero_shot_triplets.pytorch",
        "informative_file": DATA_DIR+"datasets/informative_sg.json",
    },
}
```

I've found that a lot of these files don't seem to exist in your project.
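To see at a glance which of these catalog paths actually exist on disk, a stdlib-only sketch can walk the dict and report the missing ones. The `DATA_DIR`/`IMG_DIR` roots and the trimmed `CATALOG` below are placeholders, not the repo's real values; substitute your own:

```python
import os

# Placeholder roots; substitute the real DATA_DIR / IMG_DIR values
# from your checkout of the benchmark's path catalog.
DATA_DIR = "datasets/"
IMG_DIR = "datasets/vg/"

# Trimmed-down stand-in for the DATASETS dict above.
CATALOG = {
    "VG150": {
        "img_dir": IMG_DIR + "VG_100K",
        "roidb_file": DATA_DIR + "VG150/VG-SGG-with-attri.h5",
        "dict_file": DATA_DIR + "VG150/VG-SGG-dicts-with-attri.json",
    },
}

def missing_files(catalog):
    """Return (dataset, key, path) triples whose path does not exist on disk.

    Empty-string entries (like the disabled informative_file) are skipped.
    """
    return [
        (name, key, path)
        for name, files in catalog.items()
        for key, path in files.items()
        if path and not os.path.exists(path)
    ]

for name, key, path in missing_files(CATALOG):
    print(f"[{name}] missing {key}: {path}")
```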
I would like to know whether the datasets you use were processed by yourself. If so, please let me know how you changed the training dataset.
I am using the VG150 dataset from the .h5 file created by Kaihua (see https://github.com/KaihuaTang/Scene-Graph-Benchmark.pytorch/blob/master/DATASET.md).
For the IndoorVG dataset, I pre-processed the original Visual Genome files using some scripts that you can find in another of my repositories. With those scripts you can extract the classes you want from the messy and noisy annotations of Visual Genome. I have released the scripts but I am not planning to write any documentation for them for the moment, sorry about that.
```yaml
DATASETS:
  TRAIN: ("VG_stanford_filtered_with_attribute_train",)
  VAL: ("VG_stanford_filtered_with_attribute_val",)
  TEST: ("VG_stanford_filtered_with_attribute_test",)
```

Which dataset is this using? Does it correspond to the VG150 entry of DATASETS in DatasetCatalog?
Yes, this corresponds to the VG150 dataset; it is the old name chosen by Kaihua, and I am not using it anymore. You can replace it with VG150_train, VG150_val, and VG150_test.
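With the newer names, the DATASETS block of the .yaml config would look like this (a sketch of just that block; the rest of the config stays unchanged):

```yaml
DATASETS:
  TRAIN: ("VG150_train",)
  VAL: ("VG150_val",)
  TEST: ("VG150_test",)
```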
I'm trying to run tools\detector_pretrain_net.py and I'm getting the following error:

```
E:\anaconda\envs\sgg\python.exe E:\learn_code\SGG-Benchmark-main\tools\detector_pretrain_net.py
Traceback (most recent call last):
  File "E:\learn_code\SGG-Benchmark-main\tools\detector_pretrain_net.py", line 332, in <module>
    main()
  File "E:\learn_code\SGG-Benchmark-main\tools\detector_pretrain_net.py", line 292, in main
    cfg.merge_from_file(args.config_file)
  File "E:\anaconda\envs\sgg\lib\site-packages\yacs\config.py", line 213, in merge_from_file
    self.merge_from_other_cfg(cfg)
  File "E:\anaconda\envs\sgg\lib\site-packages\yacs\config.py", line 217, in merge_from_other_cfg
    _merge_a_into_b(cfg_other, self, self, [])
  File "E:\anaconda\envs\sgg\lib\site-packages\yacs\config.py", line 478, in _merge_a_into_b
    _merge_a_into_b(v, b[k], root, key_list + [k])
  File "E:\anaconda\envs\sgg\lib\site-packages\yacs\config.py", line 478, in _merge_a_into_b
    _merge_a_into_b(v, b[k], root, key_list + [k])
  File "E:\anaconda\envs\sgg\lib\site-packages\yacs\config.py", line 491, in _merge_a_into_b
    raise KeyError("Non-existent config key: {}".format(full_key))
KeyError: 'Non-existent config key: MODEL.BACKBONE.CONV_BODY'
```
It seems that in sgg_benchmark/config/defaults.py there is no definition of MODEL.BACKBONE.CONV_BODY, and that might be causing the error. Do you know how I can solve it?
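For context, the KeyError comes from yacs' strict merge: any key in the .yaml that is absent from the defaults is rejected, which is why renaming the stale key (as the earlier answer in this thread suggests) fixes it. A stdlib-only sketch of that merge rule, simplified from what yacs does:

```python
def merge_a_into_b(a, b, key_list=()):
    """Simplified, dict-based sketch of yacs' recursive config merge."""
    for k, v in a.items():
        full_key = ".".join(key_list + (k,))
        if k not in b:
            # yacs raises exactly this for keys missing from the defaults
            raise KeyError("Non-existent config key: {}".format(full_key))
        if isinstance(v, dict):
            merge_a_into_b(v, b[k], key_list + (k,))
        else:
            b[k] = v

defaults = {"MODEL": {"BACKBONE": {"TYPE": "R-50-FPN"}}}

# Old key name from a stale .yaml -> KeyError, as in the traceback above
try:
    merge_a_into_b({"MODEL": {"BACKBONE": {"CONV_BODY": "X-101-32-8-FPN"}}}, defaults)
except KeyError as e:
    print(e)  # 'Non-existent config key: MODEL.BACKBONE.CONV_BODY'

# Renamed key matches the defaults, so the merge succeeds
merge_a_into_b({"MODEL": {"BACKBONE": {"TYPE": "X-101-32-8-FPN"}}}, defaults)
print(defaults["MODEL"]["BACKBONE"]["TYPE"])  # X-101-32-8-FPN
```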