Closed yanghaibin-cool closed 2 years ago
Hello.
The problem is with the way you specified the path. The path to the dataset should not be provided through the command-line arguments. The program loads it from the configuration file whose path you provide (as you correctly did). The argument --test-dataset
accepts a key used to look up additional information.
Here is an example of the processing.
Suppose you provide the key MOT17_DPM
for testing, i.e., --test-dataset MOT17_DPM
. (Incidentally, this value is the default, and you should leave it as it is for your purposes; you will see why in a moment.)
At this point, load_dataset_anno
is called:
dataset_key = args.test_dataset
dataset, modality = load_dataset_anno(cfg, dataset_key, args.set)
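For context, the --test-dataset flag is presumably declared via argparse roughly like this (a sketch based on this thread; the actual declaration in tools/test_net.py may differ):

```python
import argparse

# Hypothetical sketch of the test script's CLI declaration.
# Option names follow this thread; defaults are assumptions.
parser = argparse.ArgumentParser(description="Run tracker evaluation")
parser.add_argument("--config-file", type=str,
                    help="Path to the YAML configuration file")
parser.add_argument("--test-dataset", type=str, default="MOT17_DPM",
                    help="Key into dataset_maps, NOT a filesystem path")
parser.add_argument("--set", type=str, default="test",
                    help="Which split to use, e.g. train or test")

args = parser.parse_args([])   # no CLI args given: defaults apply
print(args.test_dataset)       # -> MOT17_DPM
```

Note that the flag carries a dictionary key, which is why passing a filesystem path to it fails.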
Now, when we dive into the load_dataset_anno
function, the first line of code loads the additional information I was talking about, specifically:
dataset_folder, anno_file, split_file, modality = dataset_maps[dataset_key]
There are multiple dataset_maps
entries manually specified in this source file. For the aforementioned MOT17_DPM
, the entry is as follows:
dataset_maps['MOT17_DPM'] = ['MOT17',
'anno.json',
'splits_DPM.json',
'video']
Here you can see the folder name of the dataset (make sure it contains the two subfolders, annotation and raw_data; judging by your command, it probably does not), then the file name of the annotation specification, then the file name of the splits (i.e., train or test), and finally the modality (in practice, whether the dataset is video-based or image-based).
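My understanding of how these four fields combine with the configured root directory (a sketch; the variable names are mine, not taken from the repository):

```python
import os

# Assumed root directory from the configuration file (DATASETS.ROOT_DIR).
root_dir = "/media/yhb/Data/BaiduNetdiskDownload/MOT challenge/"

# The entry discussed above: folder, annotation file, split file, modality.
dataset_maps = {
    "MOT17_DPM": ["MOT17", "anno.json", "splits_DPM.json", "video"],
}

dataset_folder, anno_file, split_file, modality = dataset_maps["MOT17_DPM"]

# Presumably the loader resolves paths along these lines:
anno_path = os.path.join(root_dir, dataset_folder, "annotation", anno_file)
split_path = os.path.join(root_dir, dataset_folder, "annotation", split_file)
print(anno_path)
# -> /media/yhb/Data/BaiduNetdiskDownload/MOT challenge/MOT17/annotation/anno.json
```

So the root directory from the config and the folder name from dataset_maps together determine where the program looks, which is why no path belongs on the command line.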
So, in our case, you have to modify the configuration file to the following:
DATASETS:
ROOT_DIR: "/media/yhb/Data/BaiduNetdiskDownload/MOT challenge/"
One more thing: I have already pointed out that your dataset is probably not in the right format. Here is the directory tree you should have, because the program expects this exact layout:
MOT17/
├── annotation
└── raw_data
├── test
│ ├── MOT17-01-DPM
│ ├── MOT17-01-FRCNN
│ ├── MOT17-01-SDP
│ ├── MOT17-03-DPM
│ ├── MOT17-03-FRCNN
│ ├── MOT17-03-SDP
│ ├── MOT17-06-DPM
│ ├── MOT17-06-FRCNN
│ ├── MOT17-06-SDP
│ ├── MOT17-07-DPM
│ ├── MOT17-07-FRCNN
│ ├── MOT17-07-SDP
│ ├── MOT17-08-DPM
│ ├── MOT17-08-FRCNN
│ ├── MOT17-08-SDP
│ ├── MOT17-12-DPM
│ ├── MOT17-12-FRCNN
│ ├── MOT17-12-SDP
│ ├── MOT17-14-DPM
│ ├── MOT17-14-FRCNN
│ └── MOT17-14-SDP
└── train
├── MOT17-02-DPM
├── MOT17-02-FRCNN
├── MOT17-02-SDP
├── MOT17-04-DPM
├── MOT17-04-FRCNN
├── MOT17-04-SDP
├── MOT17-05-DPM
├── MOT17-05-FRCNN
├── MOT17-05-SDP
├── MOT17-09-DPM
├── MOT17-09-FRCNN
├── MOT17-09-SDP
├── MOT17-10-DPM
├── MOT17-10-FRCNN
├── MOT17-10-SDP
├── MOT17-11-DPM
├── MOT17-11-FRCNN
├── MOT17-11-SDP
├── MOT17-13-DPM
├── MOT17-13-FRCNN
└── MOT17-13-SDP
I suggest you read the part of the documentation dedicated to dataset preparation, especially data ingestion. The authors provide already-ingested files for you to download. Once you have reorganized your directory structure, the annotation
folder should contain the following files:
MOT17/annotation/
├── anno.json
├── anno_pub_detection.json
├── splits.json
└── splits_DPM.json
If you have more questions, feel free to ask. I am also just starting with this architecture, so I am learning as well. Good luck to both of us.
With your help, I succeeded at last! Thank you so much for your guidance! This is undoubtedly a great help to beginners like me.
Hello. I encountered this error when testing with the following command:
python3 -m tools.test_net --config-file configs/dla/DLA_34_FPN_EMM_MOT17.yaml --output-dir /home/yhb/下载/track_results --model-file /home/yhb/下载/DLA-34-FPN_EMM_crowdhuman_mot17 --test-dataset /media/yhb/Data/BaiduNetdiskDownload/MOT challenge/MOT17/test
Error:
test_net.py: error: unrecognized arguments: challenge/MOT17/test
The path I passed to --test-dataset is the unmodified test directory of MOT17 downloaded from the official website. What changes do I need to make to --test-dataset? Any help would be very useful for beginners like me. Thank you very much!