When the test completed and evaluation started, the program hit the following error. I checked the config file and found that the fsod dataset does not define a devkit directory. Can the test results be evaluated at runtime as-is, or do I need to add other parameters to the command?
INFO test_engine.py: 222: Wrote detections to: /data_raid5/mzhe/fsod/FSOD-code/Outputs/fsod_save_dir/test/detections.pkl
INFO test_engine.py: 172: Total inference time: 8966.570s
INFO task_evaluation.py: 75: Evaluating detections
Traceback (most recent call last):
File "tools/test_net.py", line 131, in
check_expected_results=True)
File "/data_raid5/mzhe/fsod/FSOD-code/lib/core/test_engine.py", line 130, in run_inference
all_results = result_getter()
File "/data_raid5/mzhe/fsod/FSOD-code/lib/core/test_engine.py", line 110, in result_getter
multi_gpu=multi_gpu_testing
File "/data_raid5/mzhe/fsod/FSOD-code/lib/core/test_engine.py", line 174, in test_net_on_dataset
dataset, all_boxes, all_segms, all_keyps, output_dir
File "/data_raid5/mzhe/fsod/FSOD-code/lib/datasets/task_evaluation.py", line 59, in evaluate_all
dataset, all_boxes, output_dir, use_matlab=use_matlab
File "/data_raid5/mzhe/fsod/FSOD-code/lib/datasets/task_evaluation.py", line 80, in evaluate_boxes
dataset, all_boxes, output_dir, use_matlab=use_matlab)
File "/data_raid5/mzhe/fsod/FSOD-code/lib/datasets/voc_dataset_evaluator.py", line 48, in evaluate_boxes
filenames = _write_voc_results_files(json_dataset, all_boxes, salt)
File "/data_raid5/mzhe/fsod/FSOD-code/lib/datasets/voc_dataset_evaluator.py", line 61, in _write_voc_results_files
image_set_path = voc_info(json_dataset)['image_set_path']
File "/data_raid5/mzhe/fsod/FSOD-code/lib/datasets/voc_dataset_evaluator.py", line 239, in voc_info
devkit_path = DATASETS[json_dataset.name][DEVKIT_DIR]
KeyError: 'devkit_directory'
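Looking at voc_dataset_evaluator.py, the failing lookup is DATASETS[json_dataset.name][DEVKIT_DIR], so I suspect the evaluator wants a devkit path defined for the fsod test set in lib/datasets/dataset_catalog.py. Would adding an entry roughly like the sketch below be the right fix? (The 'fsod_test' name and all paths here are placeholders from my own setup, not the actual FSOD-code definitions.)

# lib/datasets/dataset_catalog.py -- minimal sketch of what I think is missing.
# Dataset name and paths are placeholders, not the real FSOD-code entries.
IM_DIR = 'image_directory'
ANN_FN = 'annotation_file'
DEVKIT_DIR = 'devkit_directory'  # the key that raises the KeyError above

DATASETS = {
    'fsod_test': {  # hypothetical dataset name
        IM_DIR: '/data_raid5/mzhe/fsod/data/images',                 # placeholder
        ANN_FN: '/data_raid5/mzhe/fsod/data/annotations/test.json',  # placeholder
        DEVKIT_DIR: '/data_raid5/mzhe/fsod/data/VOCdevkit',          # placeholder VOC-style devkit
    },
}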