Open NikoRohr opened 2 years ago
Are you sure the ground truth boxes loaded from the file are in the right format? You can check it in the `prepare_data` function in `CocoVideoDataset`.
It seems that everything is correctly loaded. I added:
```python
with open('prepare_data_results.txt', 'a+') as f:
    print(results, file=f)
```
The resulting file is 47 MB, so I extracted two samples:
```
{'img_info': {'id': 1, 'video_id': 1, 'file_name': 'EQ0A4452/img1/000001.jpg', 'height': 1080, 'width': 1920, 'frame_id': 0, 'mot_frame_id': 1, 'filename': 'EQ0A4452/img1/000001.jpg'}, 'img_prefix': '/home/sim/data/penguins/train', 'seg_prefix': None, 'proposal_file': None, 'bbox_fields': [], 'mask_fields': [], 'seg_fields': [], 'is_video_data': True, 'detections': [array([[1.125e+03, 7.500e+02, 1.175e+03, 7.950e+02, 1.000e+00],
[4.930e+02, 6.800e+02, 5.240e+02, 6.960e+02, 1.000e+00],
[1.687e+03, 7.660e+02, 1.735e+03, 8.120e+02, 1.000e+00],
[6.080e+02, 6.750e+02, 6.400e+02, 7.260e+02, 1.000e+00],
[9.090e+02, 6.610e+02, 9.380e+02, 7.100e+02, 1.000e+00],
[5.460e+02, 6.850e+02, 5.800e+02, 7.190e+02, 1.000e+00],
[2.740e+02, 7.440e+02, 3.400e+02, 7.860e+02, 1.000e+00],
[5.710e+02, 6.800e+02, 6.070e+02, 7.090e+02, 1.000e+00],
[3.960e+02, 6.760e+02, 4.380e+02, 7.470e+02, 1.000e+00],
[3.990e+02, 6.740e+02, 4.290e+02, 7.070e+02, 1.000e+00],
[4.680e+02, 6.640e+02, 4.920e+02, 6.950e+02, 1.000e+00],
[3.730e+02, 6.710e+02, 3.930e+02, 6.990e+02, 1.000e+00],
[7.710e+02, 6.890e+02, 8.080e+02, 7.180e+02, 1.000e+00],
[3.800e+02, 6.580e+02, 3.950e+02, 6.820e+02, 1.000e+00],
[4.770e+02, 7.090e+02, 5.190e+02, 7.820e+02, 1.000e+00],
[3.850e+02, 6.740e+02, 4.040e+02, 7.000e+02, 1.000e+00]])]}
```
and
```
{'img_info': {'id': 12995, 'video_id': 1, 'file_name': 'EQ0A4452/img1/012995.jpg', 'height': 1080, 'width': 1920, 'frame_id': 12994, 'mot_frame_id': 12995, 'filename': 'EQ0A4452/img1/012995.jpg'}, 'img_prefix': '/home/sim/data/penguins/train', 'seg_prefix': None, 'proposal_file': None, 'bbox_fields': [], 'mask_fields': [], 'seg_fields': [], 'is_video_data': True, 'detections': [array([[7.020e+02, 6.830e+02, 7.420e+02, 7.440e+02, 1.000e+00],
[2.670e+02, 7.330e+02, 3.410e+02, 7.860e+02, 1.000e+00],
[4.680e+02, 6.580e+02, 4.950e+02, 6.950e+02, 1.000e+00],
[2.940e+02, 7.060e+02, 3.370e+02, 7.560e+02, 1.000e+00],
[7.690e+02, 6.920e+02, 8.150e+02, 7.200e+02, 1.000e+00],
[1.233e+03, 7.370e+02, 1.300e+03, 8.230e+02, 1.000e+00],
[4.380e+02, 6.660e+02, 4.700e+02, 7.030e+02, 1.000e+00],
[3.890e+02, 6.830e+02, 4.100e+02, 7.030e+02, 1.000e+00],
[3.880e+02, 6.600e+02, 4.120e+02, 6.930e+02, 1.000e+00],
[7.240e+02, 1.026e+03, 8.370e+02, 1.080e+03, 0.000e+00],
[3.820e+02, 6.610e+02, 3.930e+02, 6.830e+02, 1.000e+00],
[1.670e+03, 7.580e+02, 1.731e+03, 8.120e+02, 1.000e+00],
[5.250e+02, 6.690e+02, 5.580e+02, 7.110e+02, 1.000e+00],
[6.730e+02, 6.790e+02, 7.060e+02, 7.210e+02, 1.000e+00],
[3.670e+02, 6.700e+02, 3.950e+02, 7.020e+02, 1.000e+00],
[1.650e+02, 6.750e+02, 2.080e+02, 7.170e+02, 1.000e+00],
[8.990e+02, 6.530e+02, 9.410e+02, 7.130e+02, 1.000e+00],
[4.760e+02, 7.060e+02, 5.190e+02, 7.780e+02, 1.000e+00],
[3.870e+02, 6.730e+02, 4.360e+02, 7.530e+02, 1.000e+00],
[4.930e+02, 6.790e+02, 5.230e+02, 6.950e+02, 1.000e+00],
[5.620e+02, 6.930e+02, 5.850e+02, 7.140e+02, 0.000e+00]])]}
```
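For what it's worth, the `detections` arrays above look like `[x1, y1, x2, y2, score]` rows. A quick sanity check over such arrays could look like the sketch below; it assumes exactly that format and the 1920x1080 frame size from `img_info`, and `check_detections` is a hypothetical helper, not part of mmtracking:

```python
import numpy as np

def check_detections(dets, img_w=1920, img_h=1080):
    """Rough sanity check for an (N, 5) array of [x1, y1, x2, y2, score] rows."""
    dets = np.asarray(dets, dtype=np.float64)
    assert dets.ndim == 2 and dets.shape[1] == 5, f'unexpected shape {dets.shape}'
    x1, y1, x2, y2, score = dets.T
    assert np.all(x2 > x1) and np.all(y2 > y1), 'boxes must have positive width and height'
    assert np.all((x1 >= 0) & (y1 >= 0) & (x2 <= img_w) & (y2 <= img_h)), 'box outside the image'
    assert np.all((score >= 0) & (score <= 1)), 'scores must lie in [0, 1]'

# two rows taken from the first sample above
check_detections([[1125., 750., 1175., 795., 1.],
                  [493., 680., 524., 696., 1.]])
```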
Is it possible that I have to change `test.pipeline.img_scale` to my actual video size (1080, 1920)?
The `img_scale` has little influence on the performance. I guess it's the fault of `CLASSES`, which you forgot to change to the true class you want to evaluate:
https://github.com/open-mmlab/mmtracking/blob/master/mmtrack/datasets/mot_challenge_dataset.py#L504
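For context, the linked line defines the class tuple the dataset evaluates. A small check of its default value, assuming mmtracking is importable (the `'penguin'` name in the comment is only taken from this issue's data paths, not from the library), looks like:

```python
from mmtrack.datasets import MOTChallengeDataset

# The tuple at the linked line is exposed as a class attribute. By default it
# only contains the MOT class, so a custom class has to be put there instead
# (e.g. ('penguin', ) for the penguin videos in this issue).
print(MOTChallengeDataset.CLASSES)  # -> ('pedestrian',)
```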
I changed it in `mot_challenge_dataset.py`, but now I get this error:
```
Traceback (most recent call last):
  File "../mmtracking/tools/test.py", line 225, in <module>
    main()
  File "../mmtracking/tools/test.py", line 215, in main
    metric = dataset.evaluate(outputs, **eval_kwargs)
  File "/home/sim/mmtracking/mmtrack/datasets/mot_challenge_dataset.py", line 459, in evaluate
    dataset = [trackeval.datasets.MotChallenge2DBox(dataset_config)]
  File "/home/sim/env_mmtracking/lib/python3.8/site-packages/trackeval/datasets/mot_challenge_2d_box.py", line 75, in __init__
    raise TrackEvalException('Attempted to evaluate an invalid class. Only pedestrian class is valid.')
trackeval.utils.TrackEvalException: Attempted to evaluate an invalid class. Only pedestrian class is valid.
```
How do I change the class to evaluate?
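For context, the check that raises this exception sits inside trackeval's `MotChallenge2DBox.__init__`, which is why any class other than `pedestrian` is rejected no matter what mmtracking passes down. The following is a from-memory sketch of that check, not a verbatim copy of the trackeval source, with a hypothetical `'penguin'` class standing in for whatever is passed down:

```python
from trackeval.utils import TrackEvalException

# Sketch (not verbatim) of the check inside MotChallenge2DBox.__init__ that
# produces the error above: only 'pedestrian' is accepted for evaluation.
valid_classes = ['pedestrian']
classes_to_eval = ['penguin']          # hypothetical custom class passed down
class_list = [c.lower() if c.lower() in valid_classes else None
              for c in classes_to_eval]
if not all(class_list):
    # this is the branch hit in the traceback above
    raise TrackEvalException(
        'Attempted to evaluate an invalid class. Only pedestrian class is valid.')
```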
File "/home/sim/env_mmtracking/lib/python3.8/site-packages/trackeval/datasets/mo t_challenge_2d_box.py",
.
You did not install `mmtracking` in editable mode. Did you reinstall `mmtracking` after the modification? I suggest installing `mmtracking` in editable mode (e.g. `pip install -e .`) if you want to modify the code.
> The `img_scale` has little influence on the performance. I guess it's the fault of `CLASSES`, which you forgot to change to the true class you want to evaluate. https://github.com/open-mmlab/mmtracking/blob/master/mmtrack/datasets/mot_challenge_dataset.py#L504
I don't think that is the problem. I changed my dataset to have just the class `pedestrian`, so I don't have to modify any code of mmtracking or trackeval. Still, the recall is like before. I also noted that it is not zero: there are 399 true positives, and I really don't know why only 399. I also did a parameter search with:
```python
model = dict(
    detector=dict(
        ...  # not important because of the usage of public detections
    ),
    type='DeepSORT',
    motion=dict(type='KalmanFilter', center_only=False),
    tracker=dict(
        type='SortTracker',
        obj_score_thr=0.5,
        match_iou_thr=0.5,
        reid=None,
        num_tentatives=[1, 100],                # values tried in the parameter search
        num_frames_retain=[10, 1000, 10000]))   # values tried in the parameter search
```
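(For reference, a single-run version of this tracker block, with scalar values in place of the search lists, might look like the sketch below; the scalar values are assumed defaults, not the ones used in this issue.)

```python
tracker=dict(
    type='SortTracker',
    obj_score_thr=0.5,        # detections below this score are ignored (GT boxes here score 1.0)
    match_iou_thr=0.5,        # minimum IoU for matching detections to existing tracks
    reid=None,                # plain SORT: no re-identification model
    num_tentatives=3,         # frames a new track stays tentative before confirmation
    num_frames_retain=30)     # frames a lost track is kept before being dropped
```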
Still, the results are the same for each run. I believe there must be a mistake in the SORT config, because in the resulting csv file there are just 5057 non-empty entries for `det_bboxes` and 399 for `track_bboxes`.
Hey, I am still working on this issue. I tried another SORT implementation with the exact same detection files and got reasonable results. So it has to be something with the config file or the implementation itself.
I would really appreciate it if someone could explain the reason for this behavior.
Are there any thresholds on the aspect ratio, width, or height of the considered bboxes? I have small bboxes with a maximum width of 250 and a maximum height of 130. Compared to the MOT17 dataset that is much smaller. Maybe this explains the behavior?
Hi, I just want to run SORT with a custom dataset, using the ground truth data as public detection results. So I created a `train_det.pkl` file filled with the ground truth boxes, without the track_id and with confidence 1. When I run test.py, no errors occur, but the evaluation results are:
I think that is strange, because there shouldn't be 487570 false negatives. I would really appreciate some help with fixing this problem. Thanks in advance, and for completeness, here is my config: