Closed SunshineDou closed 5 years ago
Well, the original depth_frame is one channel, 16-bit. Are you concerned with saturating the lower bits? Alternatively, you can use OpenCV to convert the colorizer output to grayscale.
Thank you very much for your reply. I just want to get the original one-channel 16-bit depth_frame to make datasets for 3D reconstruction with ElasticFusion, but right now I can only get a three-channel 24-bit depth frame. How can I get the original one-channel 16-bit depth_frame? Thank you again for your help.
Can you give more info please? I assume C++? Do you need to create a CV object from it? The basics:

```cpp
pipeline p;
p.start();
auto fs = p.wait_for_frames();
auto df = fs.get_depth_frame();
uint16_t* ptr = (uint16_t*)df.get_data(); // 16-bpp depth data, with 10 effective bits in every pixel
```
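If I understand correctly, the Python equivalent simply reinterprets the frame's buffer as `uint16` values. A sketch with a synthetic buffer standing in for `df.get_data()`, so it runs without a camera:

```python
import numpy as np

WIDTH, HEIGHT = 640, 480

# Stand-in for the bytes returned by depth_frame.get_data():
# each pixel is a little-endian 16-bit depth value (Z16 format).
raw = bytes(WIDTH * HEIGHT * 2)

# Reinterpret the raw buffer as a 2-D array of uint16 depth values.
depth = np.frombuffer(raw, dtype=np.uint16).reshape(HEIGHT, WIDTH)

print(depth.dtype, depth.shape)
```

With a real frame, `np.asanyarray(df.get_data())` performs the same reinterpretation directly and already yields a `uint16` array.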
@dorodnic Thank you very much for your reply. I just want to get the aligned 16-bit depth image and the color image to make datasets for 3D reconstruction. The code is as follows:

```python
import pyrealsense2 as rs
import numpy as np
import cv2

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

profile = pipeline.start(config)
depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
print("Depth Scale is: ", depth_scale)

clipping_distance_in_meters = 1  # 1 meter
clipping_distance = clipping_distance_in_meters / depth_scale

align_to = rs.stream.color
align = rs.align(align_to)
i = 1
try:
    while True:
        frames = pipeline.wait_for_frames()
        aligned_frames = align.process(frames)
        aligned_depth_frame = aligned_frames.get_depth_frame()  # aligned_depth_frame is a 640x480 depth image
        color_frame = aligned_frames.get_color_frame()
        if not aligned_depth_frame or not color_frame:
            continue

        # get_data() already yields uint16 depth; a C-style cast like
        # (uint16_t*) is C++ syntax and a SyntaxError in Python.
        depth_image = np.asanyarray(aligned_depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())

        grey_color = 153
        depth_image_3d = np.dstack((depth_image, depth_image, depth_image))  # depth image is 1 channel, color is 3 channels
        bg_removed = np.where((depth_image_3d > clipping_distance) | (depth_image_3d <= 0), grey_color, color_image)

        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
        images = np.hstack((bg_removed, depth_colormap))
        cv2.namedWindow('Align Example', cv2.WINDOW_AUTOSIZE)
        cv2.imshow('Align Example', images)

        cv2.imwrite("//home//cindy//桌面//LabTest1//lab_depth//" + str(i) + ".png", depth_image)  # save to the specified directory
        cv2.imwrite("//home//cindy//桌面//LabTest1//lab_color//" + str(i) + ".png", color_image)
        i = i + 1

        key = cv2.waitKey(1)
        if key & 0xFF == ord('q') or key == 27:
            cv2.destroyAllWindows()
            break
finally:
    pipeline.stop()
```

but it failed. What's the problem? Thank you very much for your help.
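One detail worth checking with the loop above: the saved PNG stores raw `uint16` depth units, and converting them to meters requires multiplying by the depth scale queried from the device. A small sketch with made-up values (a `depth_scale` of 0.001 is only an assumption here; on real hardware use `depth_sensor.get_depth_scale()` as in the script):

```python
import numpy as np

depth_scale = 0.001  # assumed value; query it from the depth sensor on real hardware

# Synthetic raw depth values, as they would be read back from a 16-bit PNG.
depth_raw = np.array([[1000, 2500],
                      [0, 65535]], dtype=np.uint16)

# Convert raw units to meters; 0 means "no depth data" at that pixel.
depth_m = depth_raw.astype(np.float32) * depth_scale

print(depth_m)
```

This is also why the raw 16-bit image matters for datasets: the colorized 24-bit image has no fixed mapping back to metric depth.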
[Realsense Customer Engineering Team Comment] Hi @SunshineDou,
What do you mean by "fail"? Any error message? Is the pyrealsense library installed?
@RealSense-Customer-Engineering Oh, I have solved it; I can get the 16-bit depth image now. Thank you very much sincerely~ If I have any other question, I will open another issue.
Hello, I want to do the same thing as you: use the RealSense to save images in the style of the TUM dataset. Could you tell me how you save the depth images?
Issue Description
Hello, according to #1831 I can get a "black to white" depth image now, but the depth image is still 24-bit, that is, it has three channels. I want a one-channel 16-bit depth image. What should I do?