gustn6591 opened this issue 1 year ago
Hi @gustn6591,
Depth stream is in 16bit encoding and we pass GRAY16_LE
encoding to Gstreamer to get this point cloud.
https://github.com/Kinovarobotics/ros_kortex_vision/blob/cd49bab1aa887b4791ac4d512ef181fd39361426/src/vision.cpp#L62-L66
The color stream however is in RGB8 encoding https://github.com/Kinovarobotics/ros_kortex_vision/blob/cd49bab1aa887b4791ac4d512ef181fd39361426/src/vision.cpp#L124-L127
If you launch the kinova_vision_rgbd.launch file, you will have access to both the color and depth streams, so make sure you are using the depth stream and not the color stream.
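If it helps, here is a minimal sketch of how you could ask OpenCV's GStreamer backend for the raw GRAY16_LE frames instead of letting it convert to 8-bit BGR. The `rtpgstdepay` element and the `latency` value are assumptions on my part; mirror whatever elements the pipeline in vision.cpp actually uses:

```python
# Hypothetical helper: builds a GStreamer pipeline string that requests
# 16-bit depth frames. The depayloader element here is an assumption --
# check the pipeline that vision.cpp builds and copy its elements.
def build_depth_pipeline(ip: str = "192.168.1.10") -> str:
    return (
        f"rtspsrc location=rtsp://{ip}/depth latency=30 "
        "! rtpgstdepay "
        "! videoconvert "
        "! video/x-raw,format=GRAY16_LE "
        "! appsink"
    )

# Usage (requires an OpenCV build with GStreamer support):
# import cv2
# cap = cv2.VideoCapture(build_depth_pipeline(), cv2.CAP_GSTREAMER)
# cap.set(cv2.CAP_PROP_CONVERT_RGB, 0)  # keep frames 16-bit, skip BGR8 conversion
```

The key parts are the `video/x-raw,format=GRAY16_LE` caps and disabling `CAP_PROP_CONVERT_RGB`, so `cap.read()` returns the frame as a 16-bit array rather than an 8-bit one.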
Best, Felix
Hello, I managed to get a (480, 640) grayscale image with the code below.
```python
import cv2

cap_depth = cv2.VideoCapture("rtsp://admin:admin@192.168.1.10/depth", cv2.CAP_GSTREAMER)
cv2.namedWindow('kinova_depth', cv2.WINDOW_AUTOSIZE)
while True:
    ret2, frame_depth = cap_depth.read()
    cv2.imshow('kinova_depth', frame_depth)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break
cap_depth.release()
cv2.destroyAllWindows()
```
However, the output image was in uint8 format, ranging from 0 to 255, so accurate depth information could not be obtained. I tried rescaling with (pixel/255)*65535 to approximate uint16 values, but large errors occurred because the data had already been quantized to uint8. I searched a lot and found several approaches, but I could not figure out the exact way to get depth data in uint16 format from the RealSense (D410) attached to the Gen3. I know that using ros_rgbd.launch I can get the point cloud, so I'm fairly sure the uint16 data is accessible, but I don't know how. I would like to know how to get depth information from the ros_kortex_vision repo, or to see an example, so I'm leaving this question.
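As a side note, the error from rescaling is unavoidable once the frame has been delivered as uint8: one 8-bit step spans 65535/255 = 257 sixteen-bit units, so those low-order bits are gone before any rescaling happens. A small numeric sketch (not part of the driver, just an illustration with an arbitrary depth value):

```python
# Simulate a 16-bit depth value being squashed to uint8 (as OpenCV's
# default capture path does) and then rescaled back with (pixel/255)*65535.
depth_true = 1234                              # true 16-bit depth reading
as_uint8 = depth_true * 255 // 65535           # lossy conversion to 8 bits
recovered = as_uint8 * 65535 // 255            # rescale back to 16-bit range
error = abs(depth_true - recovered)
# The round trip can be off by up to ~257, since uint8 only keeps
# 256 distinct depth levels out of 65536.
```

This is why the fix has to happen at capture time (requesting GRAY16_LE frames from GStreamer) rather than by post-processing the 8-bit image.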
Thank you