Semantic ELevation (SEL) map is a semantic Bayesian inferencing framework for real-time elevation mapping and terrain property estimation from RGB-D images.
Hi, thank you for sharing your work. It's really interesting.
However, when I try to use my own dataset, recorded with an Azure Kinect (Kinect V3) camera, the following error occurs:
[INFO] [1710521970.499223]: [sel_map] Message received!
/home/yc/catkin_map/catkin_michgen/src/sel_map/sel_map_segmentation/sel_map_segmentation/src/sel_map_segmentation/cameraSensor.py:57: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:206.)
self.depth = torch.from_numpy(depth).float().to(self.network.device, non_blocking=True)
[ERROR] [1710521972.433301]: bad callback: <bound method Subscriber.callback of <message_filters.Subscriber object at 0x7f955172c7f0>>
Traceback (most recent call last):
  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "/opt/ros/noetic/lib/python3/dist-packages/message_filters/__init__.py", line 76, in callback
    self.signalMessage(msg)
  File "/opt/ros/noetic/lib/python3/dist-packages/message_filters/__init__.py", line 58, in signalMessage
    cb(*(msg + args))
  File "/opt/ros/noetic/lib/python3/dist-packages/message_filters/__init__.py", line 330, in add
    self.signalMessage(*msgs)
  File "/opt/ros/noetic/lib/python3/dist-packages/message_filters/__init__.py", line 58, in signalMessage
    cb(*(msg + args))
  File "/home/yc/catkin_map/catkin_michgen/src/sel_map/sel_map/scripts/main_kinect.py", line 114, in syncedCallback
    map.update(pose, rgbd, intrinsic=intrinsic, R=R, min_depth=0.5, max_depth=8.0)
  File "/home/yc/catkin_map/catkin_michgen/src/sel_map/sel_map_mapper/src/sel_map_mapper/elmap.py", line 267, in update
    points = self.camera.getProjectedPointCloudWithLabels(intrinsic=intrinsic, R=R, min_depth=min_depth, max_depth=max_depth)
  File "/home/yc/catkin_map/catkin_michgen/src/sel_map/sel_map_segmentation/sel_map_segmentation/src/sel_map_segmentation/cameraSensor.py", line 249, in getProjectedPointCloudWithLabels
    pc, scores = self.projectMeasurementsIntoSensorFrame_torch(intrinsic=intrinsic, R=R, min_depth=min_depth, max_depth=max_depth)
  File "/home/yc/catkin_map/catkin_michgen/src/sel_map/sel_map_segmentation/sel_map_segmentation/src/sel_map_segmentation/cameraSensor.py", line 142, in projectMeasurementsIntoSensorFrame_torch
    scores = torch.transpose(torch.reshape(self.scores, (pixel_length, self.scores.shape[2])), 0, 1)
RuntimeError: shape '[368640, 59]' is invalid for input of size 54374400
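Tracing the numbers in that last error (my own back-of-envelope check; the resolutions below are my guesses at the Azure Kinect's default stream sizes, and 59 would be the segmentation network's class count), the depth image and the score tensor seem to cover different numbers of pixels:

```python
# Back-of-envelope check of the sizes in the RuntimeError.
# The resolutions (1280x720 RGB, 640x576 NFOV depth) are assumptions.
num_classes = 59
target_pixels = 368640    # pixel_length from the failing reshape
total_elements = 54374400  # actual number of elements in self.scores

score_pixels = total_elements // num_classes
print(score_pixels)    # 921600 = 1280 * 720  (RGB / scores resolution?)
print(target_pixels)   # 368640 = 640 * 576   (depth resolution?)

assert score_pixels * num_classes == total_elements
assert score_pixels == 1280 * 720
assert target_pixels == 640 * 576
```

So if I read this right, the scores come from a 1280x720 image while the depth image is 640x576, and the reshape fails because the two streams are not registered to the same resolution.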
I tried /depth/image_raw, /depth/camera_info, /rgb/image_raw and also /depth/image_raw, /depth_to_rgb/camera_info, /rgb/image_raw, but the error persists in both cases.
Can you tell me how to fix it? Thanks for your reply.
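By the way, the UserWarning earlier in the log about a non-writable NumPy array seems harmless here, but the usual workaround is to copy the array before handing it to PyTorch (just a sketch of that fix, not the repo's actual code; the buffer below stands in for the ROS image data):

```python
import numpy as np

# ROS image messages often arrive as read-only buffers, which is what
# triggers the PyTorch warning. np.frombuffer over an immutable bytes
# object reproduces that situation:
depth = np.frombuffer(bytes(640 * 576 * 2), dtype=np.uint16).reshape(576, 640)
print(depth.flags.writeable)        # False -> torch.from_numpy(depth) warns

# Copying produces a writable array, so
# torch.from_numpy(depth.copy()) no longer emits the warning:
depth_copy = depth.copy()
print(depth_copy.flags.writeable)   # True
```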