a-pru opened 3 years ago
The error reminds you that `results` is a list rather than a dict, which means a list is being sent to the data pipeline. In the testing of MOT, we usually send a dict to the data pipeline by setting `ref_img_sampler` in the `CocoVideoDataset` to `None`. Therefore, you need to set `ref_img_sampler` to `None`, as shown here.
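For illustration, the test split of such a config might look roughly like this; the dataset paths and pipeline below are placeholders rather than values from this issue, and the `ref_img_sampler=None` line is the only point:

```python
# Minimal sketch of a test-split config with ref_img_sampler disabled.
# Paths are placeholders; test_pipeline is assumed to be defined earlier
# in the config, as in the standard mmtracking MOT configs.
data = dict(
    test=dict(
        type='CocoVideoDataset',
        ann_file='data/my_dataset/annotations/test.json',  # placeholder path
        img_prefix='data/my_dataset/images/',               # placeholder path
        ref_img_sampler=None,  # send a single dict (not a list) to the pipeline
        pipeline=test_pipeline))
```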
Thank you for your quick response, that indeed solved the problem. I'm now stuck with another problem, this time in `regress_tracks()` (tracktor_tracker.py).
File "mmtracking/tools/test.py", line 191, in <module>
main()
File "mmtracking/tools/test.py", line 160, in main
show_score_thr=args.show_score_thr)
File "/databricks/driver/mmtracking/mmtrack/apis/test.py", line 46, in single_gpu_test
result = model(return_loss=False, rescale=True, **data)
File "/databricks/conda/envs/databricks-ml-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/databricks/conda/envs/databricks-ml-gpu/lib/python3.7/site-packages/mmcv/parallel/data_parallel.py", line 42, in forward
return super().forward(*inputs, **kwargs)
File "/databricks/conda/envs/databricks-ml-gpu/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 159, in forward
return self.module(*inputs[0], **kwargs[0])
File "/databricks/conda/envs/databricks-ml-gpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/databricks/conda/envs/databricks-ml-gpu/lib/python3.7/site-packages/mmcv/runner/fp16_utils.py", line 97, in new_func
return old_func(*args, **kwargs)
File "/databricks/driver/mmtracking/mmtrack/models/mot/base.py", line 135, in forward
return self.forward_test(img, img_metas, **kwargs)
File "/databricks/driver/mmtracking/mmtrack/models/mot/base.py", line 112, in forward_test
return self.simple_test(imgs[0], img_metas[0], **kwargs)
File "/databricks/driver/mmtracking/mmtrack/models/mot/tracktor.py", line 145, in simple_test
**kwargs)
File "/databricks/conda/envs/databricks-ml-gpu/lib/python3.7/site-packages/mmcv/runner/fp16_utils.py", line 184, in new_func
return old_func(*args, **kwargs)
File "/databricks/driver/mmtracking/mmtrack/models/mot/trackers/tracktor_tracker.py", line 152, in track
feats, img_metas, model.detector, frame_id, rescale)
File "/databricks/driver/mmtracking/mmtrack/models/mot/trackers/tracktor_tracker.py", line 78, in regress_tracks
ids = ids[valid_inds]
IndexError: index 2 is out of bounds for dimension 0 with size 2
ids: tensor([0, 1])
valid_inds: tensor([0, 2, 3, 1], device='cuda:0')
As far as I understand, this means that two objects were detected in the previous frame (with ids 0 and 1) and four objects are detected in the current frame. This should not lead to an error... or did I misunderstand something again?
Yes, the first dimensions of `bboxes` and `ids` are equal...
bboxes: torch.Size([2, 4])
ids: torch.Size([2])
Then, there is something wrong with `multiclass_nms()`. You can check inside the function.
If I return `keep` instead of `inds[keep]` in `multiclass_nms()` (line 93), tracking with Tracktor works reasonably well. But it's still unclear to me whether this is a bug or whether I'm doing something wrong.
What is the `score_thr` in `multiclass_nms()`?
Originally, it is 0 - see the function call here. But this leads to errors when an image contains very low-scoring bounding boxes. Using `self.regression['obj_score_thr']` solved this issue for me.
So to solve my issue I did the following:
(1) `multiclass_nms()` (line 93): changed `inds[keep]` to `keep`
(2) `regress_tracks()` (line 75): changed `0` to `self.regression['obj_score_thr']`
Without these two changes I was not able to run Tracktor on my custom dataset, which e.g. also contains low-scoring bounding boxes.
The `inds[keep]` can't be changed to `keep`, since that may introduce some ID switches. Could you try changing only the `0` to `self.regression['obj_score_thr']`, in order to filter low-scoring boxes, and see whether Tracktor works well?
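As a small illustration of what `score_thr` does here (a toy sketch assuming mmdet 2.x's `multiclass_nms`, where `multi_scores` carries a trailing background column and `return_inds=True` returns the kept flat indices), raising the threshold simply drops the low-scoring candidates before NMS:

```python
import torch
from mmdet.core import multiclass_nms

# Two heavily overlapping proposals, two foreground classes plus a
# background column (last). Scores are made up for the illustration.
multi_bboxes = torch.tensor([[10., 10., 50., 50.],
                             [12., 11., 51., 49.]])
multi_scores = torch.tensor([[0.01, 0.90, 0.09],
                             [0.02, 0.05, 0.93]])
nms_cfg = dict(type='nms', iou_threshold=0.5)

for score_thr in (0.0, 0.3):  # 0.3 plays the role of obj_score_thr
    dets, labels, inds = multiclass_nms(
        multi_bboxes, multi_scores, score_thr, nms_cfg, return_inds=True)
    # With score_thr=0.0 the near-zero detections survive the threshold;
    # with 0.3 only the confident ones reach the NMS step.
    print(score_thr, dets.shape[0], inds.tolist())
```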
Hi GT, I also met this problem and I believe it's a bug in mmtrack/models/mot/trackers/tracktor_tracker.py. It only occurs while tracking multi-class targets. Here is an example. Suppose we want to track two classes of targets, pedestrian and car, and in this video only three cars are recorded. Before `multiclass_nms`, `track_bboxes[0]` might be a tensor with 3 rows (3 bboxes) and 8 columns (box predictions for the 2 classes), and `ids` might be a tensor with 3 components:
ids = tensor([0, 1, 2])
In `multiclass_nms()`, the bboxes of all classes are rearranged and reshaped. Therefore, after performing `multiclass_nms`, `valid_inds` will be tensor([1, 3, 5]), which are obviously indices out of range for `ids`. Best regards, Po
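A tiny, self-contained sketch of that reshaping (pure PyTorch, mimicking only the thresholding and flattening step of `multiclass_nms`, with made-up scores) shows how the flat indices end up at 1, 3, 5:

```python
import torch

# 3 proposals, 2 classes (pedestrian = 0, car = 1); ids has one entry per proposal.
num_proposals, num_classes = 3, 2
ids = torch.tensor([0, 1, 2])
scores = torch.tensor([[0.02, 0.91],   # proposal 0: pedestrian low, car high
                       [0.03, 0.88],   # proposal 1
                       [0.01, 0.95]])  # proposal 2
# Inside multiclass_nms the per-class scores are flattened, so proposal i /
# class c lands at flat index i * num_classes + c.
valid_mask = scores.reshape(-1) > 0.3
inds = valid_mask.nonzero(as_tuple=False).squeeze(1)
print(inds)   # tensor([1, 3, 5])
# ids[inds]   # would raise "index 3 is out of bounds for dimension 0 with size 3"
```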
Hi Po, thanks for your explanation. I'm indeed also dealing with a multi-class tracking problem, and I agree with your findings.
So `valid_mask` is an array with shape `[num_bboxes*num_classes x 1]` and structure `[bbox1_class1, bbox1_class2, bbox2_class1, ...]`, hence the values in `inds` are in the range `[0, num_bboxes*num_classes]`.
To map `valid_inds` back to the indices of the detected bounding boxes, one could add `valid_inds = torch.floor_divide(valid_inds, num_classes)` before line 78 in `regress_tracks()`?
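Continuing the toy numbers from the example above, the proposed mapping would look like this (a sketch of the idea only, not a tested patch):

```python
import torch

num_classes = 2
ids = torch.tensor([0, 1, 2])
valid_inds = torch.tensor([1, 3, 5])  # flat indices returned by multiclass_nms
# Integer division by num_classes recovers the proposal index that ids is
# aligned with (proposal i, class c sits at flat index i * num_classes + c).
valid_inds = torch.floor_divide(valid_inds, num_classes)
print(valid_inds)       # tensor([0, 1, 2])
print(ids[valid_inds])  # tensor([0, 1, 2]) -- no IndexError anymore
```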
Totally agree with you, I use a similar approach.
Hi, @TheRealPoseidon. There is indeed a bug in tracktor_tracker when tracking multi-class targets. The reason is that one proposal may generate multiple detection boxes after the detector and NMS, as you pointed out.
@a-pru @TheRealPoseidon
Adding `valid_inds = torch.floor_divide(valid_inds, num_classes)` before line 78 in `regress_tracks()` is a way to work around the bug. However, it may introduce multiple objects with the same id in the current frame if two or more boxes belonging to the same proposal are kept after NMS. Therefore, you need to pick only one box for a repeated id if this happens, as sketched below.
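One possible way to do that, as an illustrative sketch only (the helper name and the keep-the-highest-score rule are assumptions, not the mmtrack implementation):

```python
import torch

def keep_best_per_id(bboxes, scores, ids):
    """Keep only the highest-scoring box for each (possibly repeated) id."""
    keep = []
    for uid in ids.unique():
        candidates = (ids == uid).nonzero(as_tuple=False).squeeze(1)
        keep.append(candidates[scores[candidates].argmax()])
    keep = torch.stack(keep)
    return bboxes[keep], scores[keep], ids[keep]

# Toy data: id 0 appears twice because two class boxes of the same proposal
# survived NMS and were mapped back to the same proposal index.
bboxes = torch.tensor([[0., 0., 10., 10.], [1., 1., 11., 11.], [20., 20., 30., 30.]])
scores = torch.tensor([0.9, 0.6, 0.8])
ids = torch.tensor([0, 0, 1])
print(keep_best_per_id(bboxes, scores, ids))
```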
Hi, could you tell me how to make a custom COCOVideo dataset? What is the dataset structure: only JPGs of the frames plus a JSON, or are there also video snippets like in VID?
Hi, did you finish your training with your custom data?
Hi,
I'm trying to use MMTracking on a custom dataset organized as a COCOVideo dataset, as shown in the documentation. But when running the tools/test.py script I get an error, because the `results` variable in /mmdetection/mmdet/datasets/pipelines/loading.py is a string instead of a dict, so loading the dataset somehow fails...
Error traceback:
Dataset yaml (only a shortened version):
I tried to use Tracktor with a standard ReID network and a custom detector which I trained previously using MMDet. Config:
Environment (I also tested using the most recent commit on the main branch of mmtrack - same problem):
Is there maybe an error in my config/dataset yaml? Or could there be a bug in the code? Any help is much appreciated, thank you!
Kind regards, Alexander