Open 5730289021-NN opened 4 years ago
Have you tried mono16? That ought to work. See the list of names here:
The video server won't do min/max scaling of the intensities in the video though, so you may end up with a very dark stream.
Have you solved the problem? I ran into the same problem. @5730289021-NN
@liamabcxyz did you try mono16 as well? ROS can definitely handle 16-bit images. Or you can do your own scaling and then feed the video server an 8-bit image. If you need to actually use the data though, you should listen on the topic.
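For anyone wondering what "do your own scaling" looks like in practice, here is a minimal sketch (numpy only, no ROS required; the function name is made up for illustration) of min/max-scaling a 16-bit depth image into the 8-bit range the video server expects:

```python
import numpy as np

def scale_depth_to_8bit(depth16):
    """Min/max-scale a uint16 depth image into the 0-255 uint8 range."""
    d = depth16.astype(np.float32)
    lo, hi = d.min(), d.max()
    if hi == lo:
        # Flat image: nothing to stretch, return all zeros
        return np.zeros(depth16.shape, dtype=np.uint8)
    return ((d - lo) * (255.0 / (hi - lo))).astype(np.uint8)

depth = np.array([[0, 1000], [2000, 4000]], dtype=np.uint16)
depth8 = scale_depth_to_8bit(depth)
```

This is the same idea as `cv2.normalize(..., norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)` used later in this thread, just without the OpenCV dependency.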
Hi @jveitchmichaelis, thanks for your reply! My goal is to display the depth image ("/camera/depth/image_rect_raw") on a web page, but I don't know how to change 16UC1 to mono16 directly. So my current plan is to send the sensor_msgs/Image.msg to the web page and use JavaScript to display the image. Thanks! I'm looking forward to your reply!
@liamabcxyz This is set in cv_bridge, so all you need to do is pass in your 16-bit image (i.e. a cv::Mat) and declare it as mono16. It depends on whether you're using Python or C++, but essentially you set the encoding flag when you convert your OpenCV Mat. You need to do this at the level of your camera publisher.
Possibly modify it here, e.g. add an additional condition to check for 16-bit images (or just rescale your 16-bit depth image to an 8-bit range):
else if (msg->encoding.find("16") != std::string::npos) {
    img = cv_bridge::toCvCopy(msg, "mono16")->image;
} else {
    // Convert to OpenCV native BGR color
    img = cv_bridge::toCvCopy(msg, "bgr8")->image;
}
see http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython
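For what it's worth, the reason relabeling the encoding works at all is that TYPE_16UC1 and MONO16 describe the same memory layout: one unsigned 16-bit value per pixel. Only the encoding string in the message differs. A quick numpy-only sanity check of that claim (no ROS needed):

```python
import numpy as np

# A 16UC1 and a mono16 image carry identical bytes; only the
# encoding string in the sensor_msgs/Image header differs.
depth = np.array([[1, 2], [3, 65535]], dtype=np.uint16)
raw = depth.tobytes()  # this is what sensor_msgs/Image.data carries
# Decoding as "mono16" or "16UC1" reads the same bytes back:
decoded = np.frombuffer(raw, dtype=np.uint16).reshape(depth.shape)
assert np.array_equal(decoded, depth)
```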
Hi @jveitchmichaelis, thanks again! With your help, I wrote a node to convert 16UC1 to bgr8 in Python. It works!
import roslib
roslib.load_manifest('converter_depth_image')
import sys
import rospy
import cv2
import numpy as np
from std_msgs.msg import String
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError


class image_converter:

    def __init__(self):
        self.image_pub = rospy.Publisher("image_topic_3", Image, queue_size=1)
        self.bridge = CvBridge()
        self.image_sub = rospy.Subscriber("camera/depth/image_rect_raw", Image, self.callback)

    def callback(self, data):
        try:
            # Relabel the 16UC1 depth image as mono16 so cv_bridge accepts it;
            # both encodings share the same 16-bit single-channel layout
            data.encoding = 'mono16'
            cv_image = self.bridge.imgmsg_to_cv2(data, "mono16")
        except CvBridgeError as e:
            print(e)
            return

        # Min/max-scale the 16-bit depth values into an 8-bit range,
        # then expand to three channels for a bgr8 message
        img_n = cv2.normalize(src=cv_image, dst=None, alpha=0, beta=255,
                              norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
        im_color = cv2.cvtColor(img_n, cv2.COLOR_GRAY2BGR)

        try:
            self.image_pub.publish(self.bridge.cv2_to_imgmsg(im_color, "bgr8"))
        except CvBridgeError as e:
            print(e)


def main(args):
    rospy.init_node('image_converter', anonymous=True)
    ic = image_converter()
    try:
        rospy.spin()
    except KeyboardInterrupt:
        print("Shutting down")


if __name__ == '__main__':
    main(sys.argv)
How was the latency with this approach?
Have you by any chance figured out how to decode 16UC1 messages sent from rosbridge?
Hi, I also use a RealSense camera with the provided ROS wrapper and also stumbled on this error message.
I got the depth stream working by adjusting image_streamer.cpp as mentioned by @jveitchmichaelis.
In the ROS wrapper of the RealSense, I changed the encoding to mono16 by changing the following lines in base_realsense_node.cpp:
Line 96, from:
    _encoding[RS2_STREAM_DEPTH] = sensor_msgs::image_encodings::TYPE_16UC1; // ROS message type
to:
    _encoding[RS2_STREAM_DEPTH] = sensor_msgs::image_encodings::MONO16; // ROS message type
and also line 119, from:
    _depth_aligned_encoding[RS2_STREAM_COLOR] = sensor_msgs::image_encodings::TYPE_16UC1;
to:
    _depth_aligned_encoding[RS2_STREAM_COLOR] = sensor_msgs::image_encodings::MONO16;
Maybe this can be helpful for others :)
I encountered an error like this when using your decoding function:
[ERROR] [1687136715.285666]: bad callback: <bound method detect.read_depimg of <__main__.detect object at 0x7f0c7142e8>>
Traceback (most recent call last):
File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/topics.py", line 750, in _invoke_callback
cb(msg)
File "rstest_ros.py", line 281, in read_depimg
self.depth_image = bridge.imgmsg_to_cv2(data, "mono16")
File "/home/jetson/catkin_cvbridge_ws/src/vision_opencv/cv_bridge/python/cv_bridge/core.py", line 168, in imgmsg_to_cv2
dtype=dtype, buffer=img_msg.data)
TypeError: buffer is too small for requested array
It seems the data of this topic is stored as a list.
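If the "buffer is too small" error really comes from the data arriving as a list of ints rather than bytes, one workaround is to decode the 16UC1 payload by hand with numpy. A hedged sketch (the function name is made up; it assumes no row padding, i.e. step == 2 * width):

```python
import numpy as np

def decode_16uc1(height, width, data, big_endian=False):
    """Decode a 16UC1 sensor_msgs/Image payload into a uint16 array.

    `data` may be bytes or a list of ints (e.g. when it arrives via
    rosbridge/JSON); np.frombuffer fails on list input, which can
    surface as "buffer is too small for requested array".
    Assumes no row padding (step == 2 * width).
    """
    if not isinstance(data, (bytes, bytearray)):
        data = bytes(bytearray(data))  # convert list of ints to bytes
    dtype = np.dtype(np.uint16).newbyteorder('>' if big_endian else '<')
    return np.frombuffer(data, dtype=dtype).reshape(height, width)
```

Usage would be `decode_16uc1(msg.height, msg.width, msg.data, msg.is_bigendian)`; check `msg.step` first if your camera pads rows.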
Greetings everyone,
When I run an Intel RealSense D435i on Ubuntu 18.04, I get this message when I try to access the depth sensor data, i.e. /camera/depth/image_rect_raw.
I'm not sure whether 16UC1 support is implemented already.
Has anyone else run into this problem? Thanks