roboflow / zero-shot-object-tracking

Object tracking implemented with the Roboflow Inference API, DeepSort, and OpenAI CLIP.
https://blog.roboflow.com/zero-shot-object-tracking/
GNU General Public License v3.0

When I run inference with a single class, e.g. person, I get this error #18


zxq309 commented 2 years ago

```
Traceback (most recent call last):
  File "clip_object_tracker.py", line 360, in <module>
    detect()
  File "clip_object_tracker.py", line 160, in detect
    pred = yolov5_engine.infer(img)
  File "/home/zx/zero-shot-object-tracking/utils/yolov5.py", line 16, in infer
    pred = self.model(img, augment=self.augment)[0]
  File "/home/zx/anaconda3/envs/roboflow/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zx/zero-shot-object-tracking/models/yolo.py", line 123, in forward
    return self.forward_once(x, profile)  # single-scale inference, train
  File "/home/zx/zero-shot-object-tracking/models/yolo.py", line 139, in forward_once
    x = m(x)  # run
  File "/home/zx/anaconda3/envs/roboflow/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zx/zero-shot-object-tracking/models/common.py", line 120, in forward
    return torch.cat(x, self.d)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 30 but got size 29 for tensor number 1 in the list.
```
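
For context: the exception is raised by `torch.cat` inside the `Concat` module (`models/common.py`), which joins feature maps from two pyramid levels. The two tensors reach the concatenation with spatial sizes 30 and 29, and `torch.cat` requires every dimension except the concatenation axis to match. This kind of off-by-one mismatch typically appears when the letterboxed input's height or width is not a multiple of the model's maximum stride, so one branch ends up one grid cell smaller after downsampling/upsampling. A minimal, self-contained illustration of the same failure (the shapes are chosen to mirror the error message, not taken from the repo):

```python
# Illustrative only: why torch.cat fails in a Concat layer when two
# pyramid-level feature maps disagree by one cell in a spatial dimension.
import torch

a = torch.zeros(1, 256, 30, 30)  # e.g. an upsampled feature map
b = torch.zeros(1, 256, 29, 30)  # e.g. a skip connection one cell smaller

try:
    torch.cat((a, b), dim=1)  # same call shape as Concat.forward
except RuntimeError as e:
    print(e)  # "Sizes of tensors must match except in dimension 1. Expected size 30 but got size 29 ..."
```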

zxq309 commented 2 years ago

I set --img-size to 1280 or 1536 and got the same error, but with the default img-size of 640 it runs successfully. Please help me understand how to change the inference image size.
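
One way to sanity-check a larger --img-size is to confirm that the frame actually passed to `yolov5_engine.infer` is divisible by the model's maximum stride (32 for standard YOLOv5 checkpoints, 64 for P6 variants). The sketch below is only illustrative; `stride`, `img`, and `make_divisible` are stand-ins and not confirmed helpers of this repo (stock YOLOv5 ships a similar `check_img_size` helper in `utils/general.py`):

```python
# Hedged sketch: verify that the letterboxed frame fed to the model is a
# multiple of the maximum stride, and report the nearest valid size if not.
import math
import torch

def make_divisible(x: int, divisor: int) -> int:
    # Smallest multiple of `divisor` that is >= x
    return math.ceil(x / divisor) * divisor

stride = 64  # assumption: int(model.stride.max()) for a P6 checkpoint, 32 otherwise
img = torch.zeros(1, 3, 1504, 1280)  # stand-in for the letterboxed frame tensor

h, w = img.shape[-2:]
if h % stride or w % stride:
    print(f"{h}x{w} is not divisible by stride {stride}; "
          f"pad/resize to {make_divisible(h, stride)}x{make_divisible(w, stride)}")
```

Since 1280 and 1536 are themselves multiples of 32 and 64, printing `img.shape` right before the `infer` call would show whether the letterbox step is padding the frame to a different multiple than the loaded model expects.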