Fast MOT based on YOLO + DeepSORT; supports YOLOv3 and YOLOv4
GNU General Public License v3.0
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead. #18
When I run a test with a webcam or file, an error is thrown:
Traceback (most recent call last):
  File "video_deepsort.py", line 47, in <module>
    for image, detections, _ in video_detector.detect("rtsp://admin:Istaadmin1@192.168.1.100/ISAPI/Streaming/Channels/101",
  File "A:\ml\yolo_deepsort\yolo3\detect\video_detect.py", line 135, in detect
    detections = self.image_detector.detect(frame)
  File "A:\ml\yolo_deepsort\yolo3\detect\img_detect.py", line 86, in detect
    detections = self.model(image)
  File "C:\Users\pcdreams\miniconda3\envs\yolo_deepsort\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "A:\ml\yolo_deepsort\yolo3\models\models.py", line 308, in forward
    x, layer_loss = module[0](x, targets, img_dim)
  File "C:\Users\pcdreams\miniconda3\envs\yolo_deepsort\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "A:\ml\yolo_deepsort\yolo3\models\models.py", line 217, in forward
    pred_conf.view((num_samples, -1, 1)),
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
What is the problem with view() here?
pytorch 1.7.0, torchvision 0.8.1, opencv-python 4.4.0.46
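The error means pred_conf is not contiguous in memory at the point where view() is called (in YOLO heads the predictions are typically permuted before being flattened), and view() only works when the requested shape is compatible with the tensor's existing strides. A minimal sketch of the failure and the two standard fixes, with hypothetical shapes standing in for the real model output:

```python
import torch

# A permute (as done on YOLO prediction tensors) leaves the data
# non-contiguous: the strides no longer match the logical shape.
x = torch.randn(2, 3, 4)
y = x.permute(0, 2, 1)  # shape (2, 4, 3), non-contiguous

# view() cannot merge the last two dims across incompatible strides,
# so it raises the same RuntimeError as in the traceback above.
try:
    y.view(2, -1, 1)
except RuntimeError as e:
    print("view failed:", e)

# Fix 1: reshape() views when possible and silently copies when it must.
a = y.reshape(2, -1, 1)

# Fix 2: make the memory contiguous first, then view() is legal.
b = y.contiguous().view(2, -1, 1)

assert torch.equal(a, b)
```

Applied to this repo, changing pred_conf.view((num_samples, -1, 1)) at yolo3/models/models.py line 217 to pred_conf.reshape((num_samples, -1, 1)) (or inserting .contiguous() before the view) should make the call succeed; the extra copy only happens when the tensor is actually non-contiguous.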