open-mmlab / mmtracking

OpenMMLab Video Perception Toolbox. It supports Video Object Detection (VID), Multiple Object Tracking (MOT), Single Object Tracking (SOT), Video Instance Segmentation (VIS) with a unified framework.
https://mmtracking.readthedocs.io/en/latest/
Apache License 2.0

How to use the CocoVideoDataset? #801

Open lijoe123 opened 1 year ago

lijoe123 commented 1 year ago

If I want to use the COCO dataset, do I need to set `load_as_video=False` and fill in `CLASSES=(...)` with the class names, like the COCO dataset names?

dyhBUPT commented 1 year ago

Yes, you can also directly use the CocoDataset from mmdet.
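To illustrate the setting asked about above, here is a minimal sketch of a `CocoVideoDataset` config fragment. The class names, annotation file, and image prefix are placeholders for illustration, not values from this thread:

```python
# Hypothetical config fragment: using CocoVideoDataset on a plain
# image-level COCO-style dataset. Paths and class names are placeholders.
data = dict(
    train=dict(
        type='CocoVideoDataset',
        load_as_video=False,        # treat annotations as independent images, not video
        classes=('person', 'car'),  # example class names, fill in your own
        ann_file='data/coco/annotations/instances_train2017.json',
        img_prefix='data/coco/train2017/',
    ))
```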

lijoe123 commented 1 year ago

Thank you for your answer.

eatbreakfast111 commented 1 year ago

> Yes, you can also directly use the CocoDataset from mmdet.

Oh, really! Can I use the CocoDataset from mmdet in mmtracking? Could you please tell me how to do it? Thank you!

dyhBUPT commented 1 year ago

> Yes, you can also directly use the CocoDataset from mmdet.
>
> Oh, really! Can I use the CocoDataset from mmdet in mmtracking? Could you please tell me how to do it? Thank you!

Yes, but it depends on your model and task. Please refer to #804.

eatbreakfast111 commented 1 year ago

Which model and task can use it?

dyhBUPT commented 1 year ago

If your model doesn't need reference images, you can use CocoDataset, e.g., ByteTrack.

The data config of ByteTrack: https://github.com/open-mmlab/mmtracking/blob/b1679f990bce7abc18e852004d2f70b09904e238/configs/mot/bytetrack/bytetrack_yolox_x_crowdhuman_mot17-private-half.py#L82-L112

The forward_train of ByteTrack: https://github.com/open-mmlab/mmtracking/blob/b1679f990bce7abc18e852004d2f70b09904e238/mmtrack/models/mot/byte_track.py#L41-L43
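The point of the linked `forward_train` is that ByteTrack trains exactly like a single-image detector, so no reference frames (and hence no video-aware dataset) are needed. A stripped-down paraphrase of that delegation pattern, with the surrounding base class and detector omitted for brevity:

```python
# Hedged paraphrase of the linked ByteTrack.forward_train: training is
# delegated wholesale to the underlying detector, so the model never
# consumes ref_img / ref_img_metas, and a plain CocoDataset suffices.
class ByteTrack:
    def __init__(self, detector):
        # In mmtracking the detector is built from a config; here it is
        # passed in directly to keep the sketch self-contained.
        self.detector = detector

    def forward_train(self, *args, **kwargs):
        # Single-image detection training only; no reference inputs.
        return self.detector.forward_train(*args, **kwargs)
```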