Zhongdao / Towards-Realtime-MOT

Joint Detection and Embedding for fast multi-object tracking

Division by zero error when running demo #230

Open amitgalor18 opened 2 years ago

amitgalor18 commented 2 years ago

I finally managed to get the docker image running on Windows. When I tried to run demo.py in the container, I got this error:

```
root@9354e09d581a:/Towards-Realtime-MOT# python /Towards-Realtime-MOT/demo.py --input-video /Towards-Realtime-MOT/raw/MOT16-01-raw.gif --weights /Towards-Realtime-MOT/cfg/darknet53.conv.74 --output-format video --output-root /Towards-Realtime-MOT/
Namespace(cfg='cfg/yolov3_1088x608.cfg', conf_thres=0.5, input_video='/Towards-Realtime-MOT/raw/MOT16-01-raw.gif', iou_thres=0.5, min_box_area=200, nms_thres=0.4, output_format='video', output_root='/Towards-Realtime-MOT/', track_buffer=30, weights='/Towards-Realtime-MOT/cfg/darknet53.conv.74')

2021-08-08 20:53:24 [INFO]: Starting tracking...
Traceback (most recent call last):
  File "/Towards-Realtime-MOT/demo.py", line 84, in <module>
    track(opt)
  File "/Towards-Realtime-MOT/demo.py", line 52, in track
    dataloader = datasets.LoadVideo(opt.input_video, opt.img_size)
  File "/Towards-Realtime-MOT/utils/datasets.py", line 94, in __init__
    self.w, self.h = self.get_size(self.vw, self.vh, self.width, self.height)
  File "/Towards-Realtime-MOT/utils/datasets.py", line 98, in get_size
    wa, ha = float(dw) / vw, float(dh) / vh
ZeroDivisionError: float division by zero
```

It happened no matter what input video I chose (I tried both GIF and WebM formats) and with several trained models (e.g. darknet53.conv.74 from the link here, or jde_darknet53_30e_1088x608.pdparams from the Baidu repo; I also tried passing one of the cfg files, but I'm pretty sure that was nonsense). What else am I doing wrong?

Zhongdao commented 2 years ago

It seems the video is not being loaded properly. Could you try .mp4 or .avi videos?

amitgalor18 commented 2 years ago

I tried it now with .mp4 and with .avi; still the same error. Is there any other specification for the video input? I used videos from the MOT16 dataset, so I wouldn't think that's the problem, but maybe there's some preprocessing step I missed. Are there any specifications for matching the weights file to the cfg architecture?

The only other thing I can think of is that there could be a problem with the docker build (for example, the image seems to be missing the required "lap" package, and I have to reinstall it every time I restart the container).

Zhongdao commented 2 years ago

@amitgalor18 There is no other specification for the video format, as long as the video can be loaded by OpenCV. My guess is that it's an issue with OpenCV/FFmpeg. Could you try loading a video with Python's OpenCV bindings directly?
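A minimal check along those lines, a sketch rather than anything from the repo (the file path below is a placeholder for whatever is passed to demo.py via `--input-video`), would confirm whether OpenCV can open and decode the file. When it cannot, the frame width/height properties come back as 0, which is exactly the zero denominator that `LoadVideo.get_size` trips over:

```python
import cv2

# Placeholder path -- point this at the same file passed to demo.py --input-video.
path = "/Towards-Realtime-MOT/raw/MOT16-01-raw.mp4"

cap = cv2.VideoCapture(path)
print("opened:", cap.isOpened())

# If the backend cannot decode the file, these properties are reported as 0.0,
# which is what ends up as vw/vh in LoadVideo.get_size and causes the ZeroDivisionError.
print("width :", cap.get(cv2.CAP_PROP_FRAME_WIDTH))
print("height:", cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

ret, frame = cap.read()
print("first frame read:", ret)
cap.release()
```

If this prints `opened: False` or zero dimensions, the problem lies in the OpenCV/FFmpeg build inside the container rather than in the tracker itself.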

siutin commented 2 years ago

I managed to fix this by changing the docker image tag to "1.10.0-cuda11.3-cudnn8-devel"
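For reference, that change presumably amounts to swapping the base-image tag in the Dockerfile. The line below is a guess that assumes the project builds on the official pytorch/pytorch image; it is not a confirmed excerpt from the repo's Dockerfile:

```dockerfile
# Assumed base-image line; the repo's actual Dockerfile may pin a different image.
FROM pytorch/pytorch:1.10.0-cuda11.3-cudnn8-devel
```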