Hi @gustn6591 There are a few RealSense-compatible plugins available for GStreamer:
C++: https://github.com/WKDSMRT/realsense-gstreamer
Python: https://aivero.com/2020/03/10/gstreamer-elements-realsense-open-sourced/ and https://gitlab.com/aivero/legacy/contrib/-/tree/master
Python and C++: https://github.com/johnny-wang16/rs-udpstreaming
If the depth map from the camera is monochrome, that suggests pyrealsense2's colorizer has not been applied to shade the pixels according to their distance values. The RealSense SDK's opencv_pointcloud_viewer.py pyrealsense2 example demonstrates setting up a colorizer. The shading is not derived from the RGB sensor, so the colorizer also works on the RGB-less D410 depth module.
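As a minimal sketch of that colorizer usage (assuming a directly connected depth camera, with the stream configuration left at defaults):

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()  # assumes a depth-capable camera is attached

# The colorizer shades pixels by distance and needs no RGB sensor
colorizer = rs.colorizer()

try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    # colorize() returns a frame holding a color-shaded depth image
    depth_color = np.asanyarray(colorizer.colorize(depth_frame).get_data())
    print(depth_color.shape)  # (H, W, 3) instead of a single gray channel
finally:
    pipeline.stop()
```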
I apologize for the lack of explanation.
What I got is not a depth map, just a single-channel grayscale image. How can I acquire depth information from an image obtained over RTSP? I would like to see an example or pipeline for this method.
Another user of a Kinova arm at https://github.com/IntelRealSense/librealsense/issues/10662#issuecomment-1179378662 tried to access an RTSP stream in RealSense but gave up eventually, unfortunately.
You would probably have to convert the OpenCV cv2 data to RealSense's rs2_frame format to obtain depth values. At https://github.com/IntelRealSense/librealsense/issues/2634#issuecomment-433904079 a RealSense team member provides a method of doing this in C++, but there is no equivalent cv2-to-rs2_frame demonstration script available for Python.
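For what it's worth, below is a rough, untested Python sketch of how that C++ approach might translate, using pyrealsense2's software_device to inject frames so the SDK's depth APIs can run on them. The stream geometry, the empty intrinsics, and in particular the numpy-buffer assignment to software_video_frame.pixels are assumptions that may need adapting to your binding version:

```python
# Rough, untested sketch: inject OpenCV-acquired depth data into librealsense
# via rs.software_device so that SDK depth APIs can be used on it.
# Assumptions: the RTSP stream carries raw 16-bit depth at 640x480 @ 30 FPS,
# and the binding accepts a numpy buffer for software_video_frame.pixels.
import numpy as np
import pyrealsense2 as rs

W, H, FPS = 640, 480, 30

sd = rs.software_device()
depth_sensor = sd.add_sensor("Depth")
depth_sensor.add_read_only_option(rs.option.depth_units, 0.001)

vs = rs.video_stream()
vs.type, vs.index, vs.uid = rs.stream.depth, 0, 0
vs.width, vs.height, vs.fps = W, H, FPS
vs.bpp, vs.fmt = 2, rs.format.z16     # z16 = 2 bytes per pixel
vs.intrinsics = rs.intrinsics()       # fill in real calibration if available
profile = depth_sensor.add_video_stream(vs)

sync = rs.syncer()
depth_sensor.open(profile)
depth_sensor.start(sync)

depth_np = np.zeros((H, W), dtype=np.uint16)  # replace with the cv2 frame data

f = rs.software_video_frame()
f.pixels = depth_np                   # the uncertain part: the buffer handoff
f.stride, f.bpp = W * 2, 2
f.timestamp, f.frame_number = 0.0, 0
f.domain = rs.timestamp_domain.system_time
f.profile = profile.as_video_stream_profile()
depth_sensor.on_video_frame(f)

frames = sync.wait_for_frames()
depth_frame = frames.get_depth_frame()
print(depth_frame.get_distance(W // 2, H // 2))  # depth in meters at center
```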
Is there a calculation method for estimating depth information from a single-channel grayscale image in uint8 format?
That question is outside my OpenCV programming knowledge, but there is an example RealSense script at https://github.com/IntelRealSense/librealsense/issues/8150 that obtains a depth value in meters from uint16_t data, which you could try adapting for uint8.
The depth scale of a D410 is 0.001, meaning each raw uint16 depth unit represents 1 millimeter.
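As a minimal sketch of applying that scale (the pixel coordinate is an arbitrary assumption, and this assumes a directly connected camera rather than an RTSP stream):

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

# Query the scale from the device rather than hard-coding it;
# a D410 should report 0.001 (1 depth unit = 1 mm)
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
depth_image = np.asanyarray(depth_frame.get_data())  # dtype uint16

x, y = 320, 240  # arbitrary pixel
distance_m = depth_image[y, x] * depth_scale

# Equivalent convenience call that applies the scale internally
distance_m_alt = depth_frame.get_distance(x, y)

pipeline.stop()
```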
I was able to approximate a uint16 value via (pixel/255)*65535, but large errors occurred because the values had already been reduced to uint8 and the precision was lost. The example shown in #8150 converts a uint16 value to uint8, so I think it is the opposite of the problem I am experiencing now.
In the process of streaming over RTSP, I think we need to find a way to receive the depth information as uint16.
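For example, a rough sketch of the scale of that quantization error (assuming the D410's 0.001 depth scale):

```python
import numpy as np

depth_scale = 0.001  # meters per depth unit (D410)

raw = np.uint16(1234)                      # true depth: 1.234 m
as_u8 = np.uint8(int(raw) * 255 // 65535)  # what an 8-bit stream preserves
restored = int(as_u8) * 65535 // 255       # best-effort reconstruction

print(raw * depth_scale)       # 1.234
print(restored * depth_scale)  # ~1.028: ~20 cm of error from quantization alone
```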
A pyrealsense2 script at the link below, which feeds RealSense streams into GStreamer and an RTSP server, looks interesting.
https://github.com/thien94/vision_to_mavros/blob/master/scripts/d4xx_to_mavlink.py
Is there an example of a pyrealsense2 pipeline that gets depth information from a RealSense camera connected via Ethernet?
The RealSense SDK has a pyrealsense2 script called net-viewer.py that is used with the SDK's 'rs-server' Ethernet RTSP network tool. The networking tool was removed in SDK 2.54.1, but I recovered the script code of net-viewer.py from the source code of 2.53.1.
Intel plans to introduce a new networking interface in the next SDK version after 2.54.1.
```python
## License: Apache 2.0. See LICENSE file in root directory.
## Copyright(c) 2021 Intel Corporation. All Rights Reserved.

###############################################
##              Network viewer              ##
###############################################

import sys
import numpy as np
import cv2
import pyrealsense2 as rs
import pyrealsense2_net as rsnet

if len(sys.argv) == 1:
    print('syntax: python net_viewer <server-ip-address>')
    sys.exit(1)

ip = sys.argv[1]

ctx = rs.context()
print('Connecting to ' + ip)
dev = rsnet.net_device(ip)
print('Connected')
print('Using device 0,', dev.get_info(rs.camera_info.name),
      ' Serial number: ', dev.get_info(rs.camera_info.serial_number))
dev.add_to(ctx)

pipeline = rs.pipeline(ctx)

# Start streaming
print('Start streaming, press ESC to quit...')
pipeline.start()

try:
    while True:
        # Wait for a coherent pair of frames: depth and color
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue

        # Convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())

        # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03),
                                           cv2.COLORMAP_JET)

        depth_colormap_dim = depth_colormap.shape
        color_colormap_dim = color_image.shape

        # If depth and color resolutions are different, resize color image to match depth image for display
        if depth_colormap_dim != color_colormap_dim:
            resized_color_image = cv2.resize(color_image,
                                             dsize=(depth_colormap_dim[1], depth_colormap_dim[0]),
                                             interpolation=cv2.INTER_AREA)
            images = np.hstack((resized_color_image, depth_colormap))
        else:
            images = np.hstack((color_image, depth_colormap))

        # Show images
        cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
        cv2.imshow('RealSense', images)
        k = cv2.waitKey(1) & 0xFF
        if k == 27:  # Escape
            cv2.destroyAllWindows()
            break
finally:
    # Stop streaming
    pipeline.stop()
    print("Finished")
```
Can't import pyrealsense2_net on Windows 10?
Thanks for the nice examples. I am modifying that code to fit my development environment, but I can't find the pyrealsense2_net module, even though pyrealsense2 itself imports fine.
As mentioned above at https://github.com/IntelRealSense/librealsense/issues/11993#issuecomment-1643563210 the networking tool was removed from SDK 2.54.1 and is planned to be replaced by a new networking interface in the next SDK release after 2.54.1. The existing networking tool should still be available in the previous SDK version 2.53.1.
Sorry, I checked and my pyrealsense2 version is 2.54.1. Thank you for your kind reply.
Please downgrade to the previous version, 2.53.1, and try again.
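After downgrading, a quick check (a minimal sketch; it assumes the pip wheel exposes __version__, which recent builds do):

```python
import pyrealsense2 as rs

# pyrealsense2_net ships with SDK 2.53.1 and earlier,
# so this should report 2.53.1.x after the downgrade
print(rs.__version__)

import pyrealsense2_net as rsnet  # should now import without error
```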
Hi @gustn6591 Do you require further assistance with this case, please? Thanks!
Case closed due to no further comments received.
Issue Description
```python
cap_depth = cv2.VideoCapture("rtsp://192.168.1.10/depth", cv2.CAP_GSTREAMER)
while cap_depth.isOpened():
    ret2, frame_depth = cap_depth.read()
    cv2.namedWindow('kinova_depth', cv2.WINDOW_AUTOSIZE)
    cv2.imshow('kinova_depth', frame_depth)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break
cap_depth.release()
cv2.destroyAllWindows()
```
Through the code above, we succeeded in displaying a depth map from the D410 attached to the Kinova Gen3 arm. However, we can only read gray pixel values from that map. I would like to know how to get distance information, or alternatively how to connect the cv2.VideoCapture("rtsp://192.168.1.10/depth", cv2.CAP_GSTREAMER) stream to a pyrealsense2 pipeline.
Thank you