Open liuyvchi opened 3 years ago
Does anyone meet the issue that the IDF1 score is larger than 100%?
Can you provide the detailed output and evaluation results? One possible reason is that you ran post_processing.py multiple times: it always appends data to the output file instead of overwriting the previous result, so the accumulated duplicate predictions inflate the score.
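To illustrate the failure mode described above (a hypothetical sketch, not the repo's actual post_processing.py code): opening the output file in append mode keeps rows from every previous run, which inflates the prediction count and can push IDF1 past 100%.

```python
# Hypothetical sketch of the append pitfall (not the repo's real code):
# mode 'a' accumulates stale rows across runs; 'w' truncates the file first.
def save_results(rows, path, overwrite=True):
    mode = "w" if overwrite else "a"
    with open(path, mode) as f:
        for row in rows:
            f.write(",".join(str(v) for v in row) + "\n")
```

Deleting the old output file (or opening it with mode "w") before re-running the post-processing step avoids this.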
Does anyone meet the issue that the IDF1 score is larger than 100%? And that the detection results are all zero when recreating the file that saves the output of post_processing.py?
You can first check the tracking output and visualize it for a quick sanity check. There are multiple steps in the main script; you need to find out which step went wrong.
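As a minimal sketch of such a quick check (assuming the standard MOT text format `frame,id,x,y,w,h,...`; this is not the repo's own code), you can group the output rows by frame so each frame's boxes can then be overlaid on the corresponding image:

```python
from collections import defaultdict

def boxes_per_frame(mot_lines):
    """Group MOT-format rows (frame,id,x,y,w,h,...) by frame number so
    each frame's boxes can be drawn over the image for a visual check."""
    frames = defaultdict(list)
    for line in mot_lines:
        parts = line.strip().split(",")
        frame, track_id = int(parts[0]), int(parts[1])
        x, y, w, h = map(float, parts[2:6])
        frames[frame].append((track_id, x, y, w, h))
    return frames
```

If a frame has no boxes at all, or the same box appears many times, that already tells you which step to inspect.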
I have a guess: the gt.txt file and files like mot17-02-DPm.txt could not be found when I ran the code before, so I used the MOT17 gt.txt and det.txt files in their place. Could this be the reason for this result? How are these two files generated? When I delete a file like mot17-02-DPm.txt and run the program again, I get the following error. I only captured the error part because the full output is long.
------------- Evaluation -------------
python mot_metric_evaluation.py --out_mot_files_path ../../dataset/MOT17/results_reid_with_traindata/tracking_output/ --gt_path ../../dataset/MOT17/train/
Traceback (most recent call last):
  File "mot_metric_evaluation.py", line 101, in <module>
    main()
  File "mot_metric_evaluation.py", line 99, in main
    summary = compute_mot_metrics(args.gt_path, args.out_mot_files_path, seqs, print_results=True)
  File "mot_metric_evaluation.py", line 72, in compute_mot_metrics
    ts = OrderedDict([(os.path.splitext(Path(f).parts[-1])[0], mm.io.loadtxt(f, fmt='mot15-2D')) for f in tsfiles])
  File "mot_metric_evaluation.py", line 72, in <listcomp>
    ts = OrderedDict([(os.path.splitext(Path(f).parts[-1])[0], mm.io.loadtxt(f, fmt='mot15-2D')) for f in tsfiles])
  File "D:\anaconda\anaconda\envs\LP\lib\site-packages\motmetrics\io.py", line 321, in loadtxt
    return func(fname, **kwargs)
  File "D:\anaconda\anaconda\envs\LP\lib\site-packages\motmetrics\io.py", line 83, in load_motchallenge
    engine='python'
  File "D:\anaconda\anaconda\envs\LP\lib\site-packages\pandas\util\_decorators.py", line 311, in wrapper
    return func(*args, **kwargs)
  File "D:\anaconda\anaconda\envs\LP\lib\site-packages\pandas\io\parsers\readers.py", line 586, in read_csv
    return _read(filepath_or_buffer, kwds)
  File "D:\anaconda\anaconda\envs\LP\lib\site-packages\pandas\io\parsers\readers.py", line 482, in _read
    parser = TextFileReader(filepath_or_buffer, **kwds)
  File "D:\anaconda\anaconda\envs\LP\lib\site-packages\pandas\io\parsers\readers.py", line 811, in __init__
    self._engine = self._make_engine(self.engine)
  File "D:\anaconda\anaconda\envs\LP\lib\site-packages\pandas\io\parsers\readers.py", line 1040, in _make_engine
    return mapping[engine](self.f, **self.options)  # type: ignore[call-arg]
  File "D:\anaconda\anaconda\envs\LP\lib\site-packages\pandas\io\parsers\python_parser.py", line 96, in __init__
    self._open_handles(f, kwds)
  File "D:\anaconda\anaconda\envs\LP\lib\site-packages\pandas\io\parsers\base_parser.py", line 229, in _open_handles
    errors=kwds.get("encoding_errors", "strict"),
  File "D:\anaconda\anaconda\envs\LP\lib\site-packages\pandas\io\common.py", line 707, in get_handle
    newline="",
FileNotFoundError: [Errno 2] No such file or directory: '../../dataset/MOT17/results_reid_with_traindata/tracking_output/MOT17-02-DPM.txt'
That is totally wrong. The evaluation should be based on the tracking output, not the detection results. You should follow the instructions to generate the tracking output first, and then run the evaluation.
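One quick way to narrow this down (a sketch, not part of the repo; the per-sequence `<sequence>.txt` naming is an assumption based on the FileNotFoundError above): check which tracking output files are actually present before calling the evaluation script.

```python
import os

def missing_tracking_files(out_dir, seqs):
    """Return the sequences whose tracking output file (e.g. MOT17-02-DPM.txt)
    is missing from out_dir. An empty directory here means the tracking step
    never produced output, which is what the FileNotFoundError indicates."""
    return [s for s in seqs
            if not os.path.isfile(os.path.join(out_dir, s + ".txt"))]
```

If every sequence is reported missing, the problem is upstream of the evaluation: the tracking step failed or wrote to a different path.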
Does this trace output refer to the run result of reid_feature_extraction.py? Or to some other part of the code?
You should run main.sh, and you will get the tracking results in this path: ../../dataset/MOT17/results_reid_with_traindata/tracking_output/. The reference evaluation results are:
I ran it as written in README.md, but there was no tracking result. However, due to limited storage on my computer, I deleted some files in dataset/MOT17/results_reid_with_traindata/detection. Will this have any impact?
No, it won't have any impact on generating results for the remaining sequences.
So what might be the reason the tracking results are not generated?
I don't know, because I don't have enough information. I just reran the main script, and there is no problem on my side.
Can you send me the tracking result you generated? I would also like to ask whether num_prediction refers to the number of IDs in that result?
I can upload the tracking results to Baidu if needed. The IDs column in the table is the number of ID switches.
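For intuition, here is a simplified sketch of how ID switches are counted (the real motmetrics evaluation matches boxes per frame with an assignment step; this toy version assumes the ground-truth-to-prediction matching is already given, and the `assignments` structure is hypothetical):

```python
def count_id_switches(assignments):
    """Count ID switches: for each ground-truth track, count the frames
    where its matched predicted ID differs from the previously matched ID.
    `assignments` maps frame -> {gt_id: pred_id} (simplified structure)."""
    last = {}       # last predicted ID matched to each ground-truth track
    switches = 0
    for frame in sorted(assignments):
        for gt_id, pred_id in assignments[frame].items():
            if gt_id in last and last[gt_id] != pred_id:
                switches += 1
            last[gt_id] = pred_id
    return switches
```

So IDs is not the number of predictions: a track that keeps the same predicted identity for its whole lifetime contributes zero switches, no matter how many frames it spans.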
OK, thank you!