Zhongdao / Towards-Realtime-MOT

Joint Detection and Embedding for fast multi-object tracking

AssertionError in create_grids #194

Closed frontseat-astronaut closed 3 years ago

frontseat-astronaut commented 3 years ago

Hi, I am trying to train on my custom dataset. The shape of my images is 864x480. When I run train.py (with the config file specified as cfg/yolov3_864x480.cfg), I get the following assertion error:

Traceback (most recent call last):
  File "train.py", line 218, in <module>
    opt=opt,
  File "train.py", line 128, in train
    loss, components = model(imgs.cuda(), targets.cuda(), targets_len.cuda())
  File "/home/datalab/anaconda3/envs/clevr/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/datalab/anaconda3/envs/clevr/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 153, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/datalab/anaconda3/envs/clevr/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/datalab/clevr/Towards-Realtime-MOT/models.py", line 263, in forward
    x, *losses = module[0](x, self.img_size, targets, self.classifier)
  File "/home/datalab/anaconda3/envs/clevr/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/datalab/clevr/Towards-Realtime-MOT/models.py", line 144, in forward
    create_grids(self, img_size, nGh, nGw)
  File "/home/datalab/clevr/Towards-Realtime-MOT/models.py", line 295, in create_grids
    "{} v.s. {}/{}".format(self.stride, img_size[1], nGh)
AssertionError: 25.41176470588235 v.s. 480/19

fanshu4869 commented 3 years ago

In models.py, lines 293 and 294, add an int() cast around the division, like this: int(img_size[0]/nGw)
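
For clarity, a sketch of what the patched lines could look like (paraphrasing create_grids; the exact surrounding code in models.py may differ slightly):

# models.py, create_grids(), around lines 293-294 (sketch of the suggested fix):
# cast the computed stride to int so the consistency check compares integers
self.stride = int(img_size[0] / nGw)
assert self.stride == int(img_size[1] / nGh), \
    "{} v.s. {}/{}".format(self.stride, img_size[1], nGh)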

frontseat-astronaut commented 3 years ago

Yeah, thanks!

ChangJay0212 commented 3 years ago

> Yeah, thanks!

I have the same issue; this method doesn't work!

mheriyanto commented 3 years ago

> Yeah, thanks!
>
> I have the same issue; this method doesn't work!

I have the same issue too. I have tried changing L293 and L294 to:

self.stride = int(img_size[0]/nGw)
assert self.stride == int(img_size[1]/nGh), "{} v.s. {}/{}".format(self.stride, img_size[1], nGh)

but it still doesn't work. Is there any suggestion?

mheriyanto commented 3 years ago

I have solved this problem: make sure the imgs.cuda() tensor has size torch.Size([BATCH_SIZE, 3, 480, 864]). You can check the size with imgs.cuda().size() or imgs.cuda().shape. I also kept the int() adjustment above.
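
A minimal sketch of such a sanity check just before the forward call in train.py (variable names follow the traceback above; the exact training loop may differ):

# train.py, inside the training loop, before the forward pass (sketch).
# The data loader must produce 3 x 480 x 864 tensors to match
# cfg/yolov3_864x480.cfg; otherwise create_grids computes a non-integer
# stride and the assertion above fires.
expected_hw = (480, 864)  # H x W implied by the 864x480 config
assert tuple(imgs.shape[-2:]) == expected_hw, \
    "expected HxW {}, got {}".format(expected_hw, tuple(imgs.shape[-2:]))
loss, components = model(imgs.cuda(), targets.cuda(), targets_len.cuda())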