Closed adnankarimjs closed 1 year ago
I received the same error
RuntimeError: Input type (torch.cuda.ByteTensor) and weight type (torch.cuda.HalfTensor) should be the same
and by installing version 3.1.1 I was able to run my super-gradients code snippet:
pip install super-gradients==3.1.1
I assume it is related to one of the changes introduced in version 3.1.2.
I have the same issue, hopefully it gets fixed soon
Like @mlampros said, try downgrading to
super-gradients==3.1.1
This should be a temporary workaround until the bug gets fixed.
Thanks, but that version has the slow-iteration bug.
Anyway, I found a solution: the weights are upward compatible, so if you want to run inference with e.g. 3.1.3, you have to train with the same version as well.
That is to say, I have to train the model on
super-gradients==3.1.3
as well? Because I trained and ran inference with the same version and still encountered the same error.
Hmm, OK, in my case it worked. But I think they may have some problems anyway, since a few other related bugs have not been solved yet.
I changed super-gradients from 3.1.2 to 3.1.1, but it doesn't seem to take effect.
There were commits to address this issue in the last few days. Using the GitHub version I was able to resolve my issue, i.e.
git clone https://github.com/Deci-AI/super-gradients.git
cd super-gradients
pip3 install .
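When installing from source like this, it is worth confirming which version Python actually picks up, since a stale 3.1.x wheel elsewhere on sys.path can shadow the git install. A minimal check using only the standard library (the package name is the PyPI one used in the commands above):

```python
from importlib import metadata

def installed_version(package: str):
    """Return the installed version of `package`, or None if it is absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

print(installed_version("super-gradients"))
```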
Fixed in https://github.com/Deci-AI/super-gradients/pull/1281. It will be available in 3.2.0.
🐛 Describe the bug
python3 app.py
The console stream is logged into /home/sacramentos/sg_logs/console.log
[2023-07-03 02:48:24] INFO - crash_tips_setup.py - Crash tips is enabled. You can set your environment variable to CRASH_HANDLER=FALSE to disable it
[2023-07-03 02:48:25] WARNING - __init__.py - Failed to import pytorch_quantization
[2023-07-03 02:48:25] WARNING - calibrator.py - Failed to import pytorch_quantization
[2023-07-03 02:48:25] WARNING - export.py - Failed to import pytorch_quantization
[2023-07-03 02:48:25] WARNING - selective_quantization_utils.py - Failed to import pytorch_quantization
Predicting Video:   0%|          | 0/1502 [00:00<?, ?it/s]
[2023-07-03 02:48:28] INFO - pipelines.py - Fusing some of the model's layers. If this takes too much memory, you can deactivate it by setting fuse_model=False
Traceback (most recent call last):
File "/home/sacramentos/Desktop/TensorRT/app.py", line 14, in <module>
yolo_nas_l.predict("hello.mp4").show()
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/models/detection_models/pp_yolo_e/pp_yolo_e.py", line 99, in predict
return pipeline(images) # type: ignore
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 94, in call
return self.predict_video(inputs, batch_size)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 122, in predict_video
return self._combine_image_prediction_to_video(result_generator, fps=fps, n_images=len(video_frames))
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 299, in _combine_image_prediction_to_video
images_predictions = [image_predictions for image_predictions in tqdm(images_predictions, total=n_images, desc="Predicting Video")]
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 299, in
images_predictions = [image_predictions for image_predictions in tqdm(images_predictions, total=n_images, desc="Predicting Video")]
File "/home/sacramentos/.local/lib/python3.10/site-packages/tqdm/std.py", line 1178, in iter
for obj in iterable:
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 149, in _generate_prediction_result
yield from self._generate_prediction_result_single_batch(batch_images)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 176, in _generate_prediction_result_single_batch
model_output = self.model(torch_inputs)
File "/home/sacramentos/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/models/detection_models/pp_yolo_e/pp_yolo_e.py", line 120, in forward
return self.head(features)
File "/home/sacramentos/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/models/detection_models/pp_yolo_e/pp_yolo_head.py", line 284, in forward
return self.forward_eval(feats)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/models/detection_models/pp_yolo_e/pp_yolo_head.py", line 241, in forward_eval
pred_bboxes = batch_distance2bbox(anchor_points_inference, reg_dist_reduced_list) * stride_tensor  # [B, Anchors, 4]
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/utils/bbox_utils.py", line 19, in batch_distance2bbox
x1y1 = -lt + points
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
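The failing line `x1y1 = -lt + points` mixes a CUDA tensor with a CPU tensor. Below is a simplified sketch of what `batch_distance2bbox` does (not the library's exact code) together with the usual workaround: moving every operand to one device before the arithmetic.

```python
import torch

def batch_distance2bbox(points, distance):
    # Simplified sketch of super_gradients' batch_distance2bbox: turn
    # (left, top, right, bottom) distances from anchor points into xyxy boxes.
    lt, rb = torch.split(distance, 2, dim=-1)
    x1y1 = -lt + points  # the line that raised the device-mismatch error
    x2y2 = rb + points
    return torch.cat([x1y1, x2y2], dim=-1)

device = "cuda" if torch.cuda.is_available() else "cpu"
points = torch.tensor([[10.0, 10.0]], device=device)
distance = torch.tensor([[2.0, 2.0, 3.0, 3.0]])  # created on CPU by default

# Workaround: put both operands on the same device before calling.
boxes = batch_distance2bbox(points, distance.to(device))
```

Until the library-side fix landed, the same idea (ensuring anchors and regression outputs share a device) is what the linked PR addresses.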
sacramentos@sacramentos-System-Product-Name:~/Desktop/TensorRT$ python3 app.py
The console stream is logged into /home/sacramentos/sg_logs/console.log
[2023-07-03 02:51:27] INFO - crash_tips_setup.py - Crash tips is enabled. You can set your environment variable to CRASH_HANDLER=FALSE to disable it
[2023-07-03 02:51:29] WARNING - __init__.py - Failed to import pytorch_quantization
[2023-07-03 02:51:29] WARNING - calibrator.py - Failed to import pytorch_quantization
[2023-07-03 02:51:29] WARNING - export.py - Failed to import pytorch_quantization
[2023-07-03 02:51:29] WARNING - selective_quantization_utils.py - Failed to import pytorch_quantization
[2023-07-03 02:51:30] INFO - pipelines.py - Fusing some of the model's layers. If this takes too much memory, you can deactivate it by setting fuse_model=False
Traceback (most recent call last):
File "/home/sacramentos/Desktop/TensorRT/app.py", line 14, in <module>
yolo_nas_l.predict("test.png").show()
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/models/detection_models/pp_yolo_e/pp_yolo_e.py", line 99, in predict
return pipeline(images) # type: ignore
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 96, in call
return self.predict_images(inputs, batch_size)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 111, in predict_images
return self._combine_image_prediction_to_images(result_generator, n_images=len(images))
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 290, in _combine_image_prediction_to_images
images_predictions = [next(iter(images_predictions))]
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 149, in _generate_prediction_result
yield from self._generate_prediction_result_single_batch(batch_images)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 176, in _generate_prediction_result_single_batch
model_output = self.model(torch_inputs)
File "/home/sacramentos/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/models/detection_models/pp_yolo_e/pp_yolo_e.py", line 120, in forward
return self.head(features)
File "/home/sacramentos/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/models/detection_models/pp_yolo_e/pp_yolo_head.py", line 284, in forward
return self.forward_eval(feats)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/models/detection_models/pp_yolo_e/pp_yolo_head.py", line 241, in forward_eval
pred_bboxes = batch_distance2bbox(anchor_points_inference, reg_dist_reduced_list) * stride_tensor  # [B, Anchors, 4]
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/utils/bbox_utils.py", line 19, in batch_distance2bbox
x1y1 = -lt + points
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
sacramentos@sacramentos-System-Product-Name:~/Desktop/TensorRT$ python3 app.py
The console stream is logged into /home/sacramentos/sg_logs/console.log
[2023-07-03 02:51:39] INFO - crash_tips_setup.py - Crash tips is enabled. You can set your environment variable to CRASH_HANDLER=FALSE to disable it
[2023-07-03 02:51:40] WARNING - __init__.py - Failed to import pytorch_quantization
[2023-07-03 02:51:40] WARNING - calibrator.py - Failed to import pytorch_quantization
[2023-07-03 02:51:40] WARNING - export.py - Failed to import pytorch_quantization
[2023-07-03 02:51:40] WARNING - selective_quantization_utils.py - Failed to import pytorch_quantization
[2023-07-03 02:51:41] INFO - pipelines.py - Fusing some of the model's layers. If this takes too much memory, you can deactivate it by setting fuse_model=False
Traceback (most recent call last):
File "/home/sacramentos/Desktop/TensorRT/app.py", line 14, in <module>
yolo_nas_l.predict("test.png").show()
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/models/detection_models/pp_yolo_e/pp_yolo_e.py", line 99, in predict
return pipeline(images) # type: ignore
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 96, in call
return self.predict_images(inputs, batch_size)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 111, in predict_images
return self._combine_image_prediction_to_images(result_generator, n_images=len(images))
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 290, in _combine_image_prediction_to_images
images_predictions = [next(iter(images_predictions))]
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 149, in _generate_prediction_result
yield from self._generate_prediction_result_single_batch(batch_images)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 176, in _generate_prediction_result_single_batch
model_output = self.model(torch_inputs)
File "/home/sacramentos/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/models/detection_models/pp_yolo_e/pp_yolo_e.py", line 120, in forward
return self.head(features)
File "/home/sacramentos/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/models/detection_models/pp_yolo_e/pp_yolo_head.py", line 284, in forward
return self.forward_eval(feats)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/models/detection_models/pp_yolo_e/pp_yolo_head.py", line 241, in forward_eval
pred_bboxes = batch_distance2bbox(anchor_points_inference, reg_dist_reduced_list) * stride_tensor # [B, Anchors, 4]
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/utils/bbox_utils.py", line 19, in batch_distance2bbox
x1y1 = -lt + points
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
sacramentos@sacramentos-System-Product-Name:~/Desktop/TensorRT$ python3 app.py
The console stream is logged into /home/sacramentos/sg_logs/console.log
[2023-07-03 02:52:07] INFO - crash_tips_setup.py - Crash tips is enabled. You can set your environment variable to CRASH_HANDLER=FALSE to disable it
[2023-07-03 02:52:09] WARNING - __init__.py - Failed to import pytorch_quantization
[2023-07-03 02:52:09] WARNING - calibrator.py - Failed to import pytorch_quantization
[2023-07-03 02:52:09] WARNING - export.py - Failed to import pytorch_quantization
[2023-07-03 02:52:09] WARNING - selective_quantization_utils.py - Failed to import pytorch_quantization
[ WARN:0@1.690] global cap_v4l.cpp:982 open VIDEOIO(V4L2:/dev/video0): can't open camera by index
[ERROR:0@1.690] global obsensor_uvc_stream_channel.cpp:156 getStreamChannelGroup Camera index out of range
Traceback (most recent call last):
File "/home/sacramentos/Desktop/TensorRT/app.py", line 14, in
yolo_nas_l.predict_webcam()
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/models/detection_models/pp_yolo_e/pp_yolo_e.py", line 110, in predict_webcam
pipeline.predict_webcam()
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 132, in predict_webcam
video_streaming = WebcamStreaming(frame_processing_fn=_draw_predictions, fps_update_frequency=1)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/utils/media/stream.py", line 33, in init
raise ValueError("Could not open video capture device")
ValueError: Could not open video capture device
sacramentos@sacramentos-System-Product-Name:~/Desktop/TensorRT$ python3 app.py
The console stream is logged into /home/sacramentos/sg_logs/console.log
[2023-07-03 02:52:29] INFO - crash_tips_setup.py - Crash tips is enabled. You can set your environment variable to CRASH_HANDLER=FALSE to disable it
[2023-07-03 02:52:30] WARNING - __init__.py - Failed to import pytorch_quantization
[2023-07-03 02:52:30] WARNING - calibrator.py - Failed to import pytorch_quantization
[2023-07-03 02:52:30] WARNING - export.py - Failed to import pytorch_quantization
[2023-07-03 02:52:30] WARNING - selective_quantization_utils.py - Failed to import pytorch_quantization
[ WARN:0@1.689] global cap_v4l.cpp:982 open VIDEOIO(V4L2:/dev/video0): can't open camera by index
[ERROR:0@1.689] global obsensor_uvc_stream_channel.cpp:156 getStreamChannelGroup Camera index out of range
Traceback (most recent call last):
File "/home/sacramentos/Desktop/TensorRT/app.py", line 14, in
yolo_nas_l.predict_webcam().show()
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/models/detection_models/pp_yolo_e/pp_yolo_e.py", line 110, in predict_webcam
pipeline.predict_webcam()
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 132, in predict_webcam
video_streaming = WebcamStreaming(frame_processing_fn=_draw_predictions, fps_update_frequency=1)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/utils/media/stream.py", line 33, in init
raise ValueError("Could not open video capture device")
ValueError: Could not open video capture device
sacramentos@sacramentos-System-Product-Name:~/Desktop/TensorRT$ python3 app.py
The console stream is logged into /home/sacramentos/sg_logs/console.log
[2023-07-03 02:52:42] INFO - crash_tips_setup.py - Crash tips is enabled. You can set your environment variable to CRASH_HANDLER=FALSE to disable it
[2023-07-03 02:52:44] WARNING - __init__.py - Failed to import pytorch_quantization
[2023-07-03 02:52:44] WARNING - calibrator.py - Failed to import pytorch_quantization
[2023-07-03 02:52:44] WARNING - export.py - Failed to import pytorch_quantization
[2023-07-03 02:52:44] WARNING - selective_quantization_utils.py - Failed to import pytorch_quantization
[ WARN:0@1.697] global cap_v4l.cpp:982 open VIDEOIO(V4L2:/dev/video0): can't open camera by index
[ERROR:0@1.697] global obsensor_uvc_stream_channel.cpp:156 getStreamChannelGroup Camera index out of range
Traceback (most recent call last):
File "/home/sacramentos/Desktop/TensorRT/app.py", line 13, in
yolo_nas_l.predict_webcam().show()
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/models/detection_models/pp_yolo_e/pp_yolo_e.py", line 110, in predict_webcam
pipeline.predict_webcam()
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 132, in predict_webcam
video_streaming = WebcamStreaming(frame_processing_fn=_draw_predictions, fps_update_frequency=1)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/utils/media/stream.py", line 33, in init
raise ValueError("Could not open video capture device")
ValueError: Could not open video capture device
sacramentos@sacramentos-System-Product-Name:~/Desktop/TensorRT$ python3 app.py
The console stream is logged into /home/sacramentos/sg_logs/console.log
[2023-07-03 02:55:25] INFO - crash_tips_setup.py - Crash tips is enabled. You can set your environment variable to CRASH_HANDLER=FALSE to disable it
[2023-07-03 02:55:27] WARNING - __init__.py - Failed to import pytorch_quantization
[2023-07-03 02:55:27] WARNING - calibrator.py - Failed to import pytorch_quantization
[2023-07-03 02:55:27] WARNING - export.py - Failed to import pytorch_quantization
[2023-07-03 02:55:27] WARNING - selective_quantization_utils.py - Failed to import pytorch_quantization
Downloading: "https://sghub.deci.ai/models/yolox_t_coco.pth" to /home/sacramentos/.cache/torch/hub/checkpoints/yolox_t_coco.pth
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 58.4M/58.4M [00:36<00:00, 1.70MB/s]
[2023-07-03 02:56:05] INFO - pipelines.py - Fusing some of the model's layers. If this takes too much memory, you can deactivate it by setting fuse_model=False
Traceback (most recent call last):
File "/home/sacramentos/Desktop/TensorRT/app.py", line 14, in <module>
yolo_nas_l.predict("test.png").show()
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/models/detection_models/yolo_base.py", line 488, in predict
return pipeline(images) # type: ignore
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 96, in call
return self.predict_images(inputs, batch_size)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 111, in predict_images
return self._combine_image_prediction_to_images(result_generator, n_images=len(images))
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 290, in _combine_image_prediction_to_images
images_predictions = [next(iter(images_predictions))]
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 149, in _generate_prediction_result
yield from self._generate_prediction_result_single_batch(batch_images)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/pipelines/pipelines.py", line 176, in _generate_prediction_result_single_batch
model_output = self.model(torch_inputs)
File "/home/sacramentos/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/models/detection_models/yolo_base.py", line 507, in forward
out = self._backbone(x)
File "/home/sacramentos/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/models/detection_models/yolo_base.py", line 258, in forward
return AbstractYoloBackbone.forward(self, x)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/training/models/detection_models/yolo_base.py", line 239, in forward
x = layer_module(x)
File "/home/sacramentos/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/sacramentos/.local/lib/python3.10/site-packages/super_gradients/modules/conv_bn_act_block.py", line 84, in forward
return self.act(self.bn(self.conv(x)))
File "/home/sacramentos/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/sacramentos/.local/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/home/sacramentos/.local/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.ByteTensor) and weight type (torch.cuda.HalfTensor) should be the same