Zhongdao / UniTrack

[NeurIPS'21] Unified tracking framework with a single appearance model. It supports Single Object Tracking (SOT), Video Object Segmentation (VOS), Multi-Object Tracking (MOT), Multi-Object Tracking and Segmentation (MOTS), Pose Tracking, Video Instance Segmentation (VIS), and class-agnostic MOT (e.g. TAO dataset).
MIT License

Problem with the Unitrack + YOLOX Demo #18

Open · hj91k opened this issue 2 years ago

hj91k commented 2 years ago

Hello, I hope to run the UniTrack + YOLOX demo in real time.

I ran the UniTrack + YOLOX demo with the "webcam" option, but it seems to only work with a test video.

How can I run the demo with a webcam?

Zhongdao commented 2 years ago

Hi, it is not difficult to revise the code to support a webcam demo. You need to write a webcam dataloader and use it in place of the original LoadVideo dataloader in this line.
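
For anyone trying the same thing, here is a minimal sketch of such a webcam dataloader built on `cv2.VideoCapture`. The returned tuple format `(frame_id, preprocessed_img, original_img0)` and the default image size are assumptions; they should be matched to whatever UniTrack's `LoadVideo` actually yields.

```python
import cv2
import numpy as np


class LoadWebcam:
    """Sketch of a webcam dataloader meant to mimic LoadVideo's iteration interface
    (assumed here to yield (frame_id, preprocessed img, original img0))."""

    def __init__(self, cam_id=0, img_size=(1088, 608)):
        self.cap = cv2.VideoCapture(cam_id)
        self.width, self.height = img_size
        self.count = 0

    def __iter__(self):
        return self

    def __next__(self):
        self.count += 1
        ret, img0 = self.cap.read()  # raw BGR frame from the camera
        if not ret:
            self.cap.release()
            raise StopIteration
        # Resize, BGR -> RGB, HWC -> CHW, scale to [0, 1]
        img = cv2.resize(img0, (self.width, self.height))
        img = img[:, :, ::-1].transpose(2, 0, 1)
        img = np.ascontiguousarray(img, dtype=np.float32) / 255.0
        return self.count, img, img0

    def __len__(self):
        return 0  # length is unknown for a live stream
```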

hj91k commented 2 years ago

Thanks for your advice (@Zhongdao), I got the webcam version working. Then I tried a MOTS demo with the webcam by calling the eval_seq() of MOTS instead of the eval_seq() of MOT.

But it failed with an error message ('numpy.ndarray' object has no attribute 'unsqueeze'). How should I solve this?
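
For context on that error: `unsqueeze` is a method of `torch.Tensor`, not of NumPy arrays, so somewhere along the MOTS path a raw NumPy array (e.g. an image or mask from the webcam loader) is reaching code that expects a tensor. A minimal illustration of the conversion pattern, with hypothetical variable names:

```python
import numpy as np
import torch

# Hypothetical example: an image/mask produced as a NumPy array
mask = np.zeros((608, 1088), dtype=np.float32)

# mask.unsqueeze(0) would raise:
#   AttributeError: 'numpy.ndarray' object has no attribute 'unsqueeze'
mask_t = torch.from_numpy(mask).unsqueeze(0)  # convert to a tensor first, then add a dim
print(mask_t.shape)  # torch.Size([1, 608, 1088])
```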

Zhongdao commented 2 years ago

Unfortunately, if you want to try MOTS with a webcam, the YOLOX demo cannot be used directly, because YOLOX only outputs bounding boxes rather than pixel-wise masks. I think you can find a fast instance segmentation model and use it in place of YOLOX in the MOT demo.
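
If anyone wants to experiment with that suggestion, one readily available (though not particularly fast) instance segmentation model is torchvision's Mask R-CNN. The sketch below only shows how to get boxes and pixel-wise masks from a single webcam frame; wiring those outputs into UniTrack's MOTS eval_seq() is not shown, since it depends on that interface. Depending on your torchvision version you may need `pretrained=True` instead of `weights="DEFAULT"`.

```python
import cv2
import torch
import torchvision

# Mask R-CNN as a stand-in instance segmentation model
# (slower than YOLOX; a faster model could be swapped in the same way).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

cap = cv2.VideoCapture(0)
ret, frame = cap.read()  # one BGR webcam frame
cap.release()

# BGR -> RGB, HWC -> CHW, [0, 255] -> [0, 1]
img = torch.from_numpy(frame[:, :, ::-1].copy()).permute(2, 0, 1).float() / 255.0

with torch.no_grad():
    out = model([img])[0]  # dict with 'boxes', 'labels', 'scores', 'masks'

keep = out['scores'] > 0.5
boxes = out['boxes'][keep]            # (N, 4) boxes, like a detector output
masks = out['masks'][keep, 0] > 0.5   # (N, H, W) binary pixel-wise masks
print(boxes.shape, masks.shape)
```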