open-mmlab / mmtracking

OpenMMLab Video Perception Toolbox. It supports Video Object Detection (VID), Multiple Object Tracking (MOT), Single Object Tracking (SOT), Video Instance Segmentation (VIS) with a unified framework.
https://mmtracking.readthedocs.io/en/latest/
Apache License 2.0

How to get checkpoint / deploy / serve model for DeepSort/Tracktor #512

Open castrovictor opened 2 years ago

castrovictor commented 2 years ago

Hello, I have a question. How can I use DeepSORT apart from executing tools/test.py? For example, mmdetection produces a checkpoint that can be converted to TensorRT. I'm asking because here it seems that we need a checkpoint, but I do not know what checkpoint DeepSORT produces.

JingweiZhang12 commented 2 years ago

The checkpoints of DeepSORT are composed of a detector checkpoint and a reid checkpoint. We use them by wrapping them into the init_cfg in the Deepsort config. Please refer to this: https://github.com/open-mmlab/mmtracking/blob/master/configs/mot/deepsort/deepsort_faster-rcnn_fpn_4e_mot17-private-half.py#L14
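The composition described above can be sketched as a plain Python structure mirroring the linked config. This is a sketch, not the literal config file: the field names follow mmtracking's MOT config conventions, but the checkpoint paths here are placeholders, not real URLs.

```python
# Sketch of how a DeepSORT config composes two pretrained checkpoints.
# Field names follow mmtracking's MOT config style; the checkpoint
# paths are placeholders (assumptions), not real download URLs.
model = dict(
    type='DeepSORT',
    detector=dict(
        # The detector (e.g. Faster R-CNN) loads its own checkpoint.
        init_cfg=dict(
            type='Pretrained',
            checkpoint='path/or/url/to/detector.pth',  # placeholder
        ),
    ),
    reid=dict(
        # The ReID network loads a separately trained checkpoint.
        init_cfg=dict(
            type='Pretrained',
            checkpoint='path/or/url/to/reid.pth',  # placeholder
        ),
    ),
)
```

Because both sub-checkpoints are resolved through `init_cfg` when the model is built, the config alone is enough to run the tracker; there is no single fused "DeepSORT checkpoint" produced by training.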

castrovictor commented 2 years ago

> The checkpoints of DeepSORT are composed of a detector checkpoint and a reid checkpoint. We use them by wrapping them into the init_cfg in the Deepsort config. Please refer to this: https://github.com/open-mmlab/mmtracking/blob/master/configs/mot/deepsort/deepsort_faster-rcnn_fpn_4e_mot17-private-half.py#L14

Thanks for your response. I saw that, and I was able to make it work with different checkpoints. My question is about using/deploying the model. Say I have a video and I want to run DeepSORT on it. How do I do that? I saw model serving in the docs, but the script asks for a "checkpoint". That's why I am confused.

Edit: I found how to feed it a video; for some reason I did not see it yesterday. But I am still interested in knowing whether it is possible to serve DeepSORT with TorchServe as described in the docs.
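For anyone landing here with the same first question, running DeepSORT on a video looks roughly like the sketch below (paths are assumptions based on the repo layout; adjust the config and input to your checkout):

```shell
# Hedged sketch: run the MOT demo script on a video from a
# cloned mmtracking repo. Config and file paths are examples.
python demo/demo_mot_vis.py \
    configs/mot/deepsort/deepsort_faster-rcnn_fpn_4e_mot17-private-half.py \
    --input demo/demo.mp4 \
    --output mot_result.mp4
```

No `--checkpoint` argument is needed here, since the detector and ReID checkpoints are pulled in through `init_cfg` inside the config itself.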