duanzhiihao / RAPiD

RAPiD: Rotation-Aware People Detection in Overhead Fisheye Images (CVPR 2020 Workshops)
http://vip.bu.edu/rapid/

Tensors not on the same device #35

Open drjoeCV opened 2 years ago

drjoeCV commented 2 years ago

I'm getting `RuntimeError: Expected all tensors to be on the same device` in several locations, for example:

- `iou_mask.py` line 340: `boxes1[:,0] = *w`
- `rapid.py` line 221: `norm_anch_wh = anchors[:,0:2] / imp_hw`

and more.

duanzhiihao commented 2 years ago

Hi, your information is not sufficient for me to locate the problem. Which python file are you executing? Can you provide your command line args?

drjoeCV commented 2 years ago

Following the README, I'm running `$ python train.py --model rapid_pL1 --dataset COCO --batch_size 4`. I successfully ran example.py on both the CPU and the GPU.

I'm on torch version 1.10.1

BTW, that reminds me of another problem: the README says to use `--model rapid_L1`, but it's supposed to be `--model rapid_pL1`. I added an assert into train.py to catch the case where the model string is not one of the supported pL1 or pL2 variants.
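
For reference, a minimal sketch of that kind of guard (a hypothetical helper, not the actual train.py code):

```python
SUPPORTED_MODELS = ('rapid_pL1', 'rapid_pL2')

def check_model_arg(model: str) -> None:
    # Fail early with a clear message if the --model string is unsupported.
    assert model in SUPPORTED_MODELS, \
        f'unsupported --model {model!r}; expected one of {SUPPORTED_MODELS}'
```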

duanzhiihao commented 2 years ago

I see.

  1. The code was developed on PyTorch 1.6 or so. At some version update, PyTorch changed `tensor.shape` to return a `torch.Size` where it previously returned a plain tuple, and that update breaks my code. I will push an update sometime tomorrow to make it compatible with PyTorch 1.10.
  2. I updated the README.
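
For anyone hitting this before the fix lands, a minimal workaround sketch (assuming the downstream code expects a plain tuple of ints rather than a `torch.Size`):

```python
import torch

x = torch.zeros(3, 608, 1024)
size = x.shape[1:]                 # torch.Size([608, 1024]), not a plain tuple

# Convert to a plain tuple of Python ints before handing it to code that
# expects a tuple (or before wrapping it in a tensor later on).
size = tuple(int(s) for s in size)
```
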
drjoeCV commented 2 years ago

Thanks. That is the problem: it's returning a `torch.Size` instead of a tuple or even a tensor. You have an `isinstance(size, Tuple)` check that you need to change in iou_mask.py, and in rapid.py you need to change `imp_hw` from a `torch.Size` to a `torch.Tensor` that is ALSO on the same device as the anchors.
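
Something along these lines for the rapid.py side (a minimal sketch with made-up anchor values, not the actual RAPiD code):

```python
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
anchors = torch.tensor([[18., 18.], [36., 36.]], device=device)  # placeholder anchors
img = torch.zeros(3, 608, 1024, device=device)

# tensor.shape gives a torch.Size; build a tensor from it on the anchors'
# device so the division below does not mix CPU and CUDA operands.
imp_hw = torch.tensor(img.shape[1:], dtype=anchors.dtype, device=anchors.device)
norm_anch_wh = anchors[:, 0:2] / imp_hw
```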