IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

Converting numpy depth_map back into pyrealsense2 depth frame #5784

Open · LMurphy99 opened this issue 4 years ago

LMurphy99 commented 4 years ago
| Required Info | |
|---|---|
| Camera Model | D435i |
| Firmware Version | N/A |
| Operating System & Version | Win 10 |
| Kernel Version (Linux Only) | N/A |
| Platform | PC |
| SDK Version | N/A |
| Language | python |
| Segment | N/A |

Issue Description

I have been working on pre-processing depth maps before I convert them to point clouds. This involved converting depth_frame.get_data() into a numpy array and doing some processing (e.g., converting all pixels of no interest to NaN). I then wanted to use the rs.pointcloud().calculate() function to compute my point cloud, with the altered depth map as input. However, I'm getting the following error:

"TypeError: calculate(): incompatible function arguments. The following argument types are supported:

  1. (self: pyrealsense2.pyrealsense2.pointcloud, depth: pyrealsense2.pyrealsense2.frame) -> pyrealsense2.pyrealsense2.points"

I was wondering how I would go about converting my numpy depth map into a format that the function can accept.

Attached is the code of note.
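For reference, the failing pattern boils down to the minimal sketch below (variable names hypothetical). Two things are worth noting: calculate() only accepts an SDK frame object, and a z16 depth frame holds uint16 values, so NaN cannot actually be stored in the frame itself; the SDK convention for "no data" is a depth value of 0.

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()

# z16 depth is uint16, so NaN is not representable; 0 marks an invalid pixel
depth_map = np.asanyarray(depth_frame.get_data()).copy()
depth_map[depth_map > 2000] = 0  # e.g. discard everything beyond ~2 m

pc = rs.pointcloud()
points = pc.calculate(depth_map)  # TypeError: expects an rs2 frame, not a numpy array
```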

zapaishchykova commented 4 years ago

following

rcroset commented 4 years ago

Interested in an answer too !

r-ratajczak commented 4 years ago

Hi, I am also trying to convert the numpy array back to BufData in order to reuse the modified data. I tried casting, memoryview, and creating new BufData objects, but without any success. Is there any way to recreate a BufData object? The environment returns an error that there is no defined constructor.

RealSenseCustomerSupport commented 4 years ago

@LMurphy99 Sorry for the late response. We have submitted your requirement to the engineering team and will let you know of any updates. Thanks!

jts3k commented 4 years ago

following

adityashrm21 commented 3 years ago

Are there any updates on this? Thanks!

RealSenseSupport commented 3 years ago

@adityashrm21 Sorry, this enhancement is still being worked on. We will let you know of any updates. Thanks!

tsaizhenling commented 3 years ago

following

MartyG-RealSense commented 3 years ago

Hi everyone,

The BufData object creation question has been answered by a RealSense team member in the link below.

https://github.com/IntelRealSense/librealsense/issues/8394#issuecomment-782811459

This case will remain open, though, due to the ongoing enhancement feature request associated with the original question.

kailkuhn commented 3 years ago

@MartyG-RealSense Are there any updates on this? Thanks!

MartyG-RealSense commented 3 years ago

Hi @kailkuhn If you are asking about the official feature request for numpy depth map conversion, there are no updates to report.

In regard to exploration of numpy conversion by RealSense users, the link below may provide you with some further useful insights.

https://github.com/IntelRealSense/librealsense/issues/2551
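Independent of that discussion, one route for getting a numpy buffer back into the SDK is the software-device API, which wraps an arbitrary buffer in a synthetic stream and hands back a regular frame that pc.calculate() accepts. Below is a minimal sketch of that route; it is a community workaround rather than an official recipe. The intrinsics, depth-units value, and timestamps are placeholders you should replace with the ones from your real stream profile, and the exact attribute names should be checked against the pyrealsense2 bindings of your SDK version.

```python
import numpy as np
import pyrealsense2 as rs

W, H = 640, 480
depth_np = np.zeros((H, W), dtype=np.uint16)  # your processed depth map (z16 is uint16,
                                              # so mark "no interest" pixels with 0, not NaN)

# Build a software device with one synthetic depth stream
sw_dev = rs.software_device()
sw_sensor = sw_dev.add_sensor("Depth")
sw_sensor.add_read_only_option(rs.option.depth_units, 0.001)  # assumed 1 mm depth scale
sw_dev.create_matcher(rs.matchers.default)  # optional for a single stream

intr = rs.intrinsics()  # placeholder values; copy the real ones from your stream profile
intr.width, intr.height = W, H
intr.ppx, intr.ppy = W / 2.0, H / 2.0
intr.fx = intr.fy = 600.0
intr.model = rs.distortion.brown_conrady
intr.coeffs = [0.0, 0.0, 0.0, 0.0, 0.0]

vs = rs.video_stream()
vs.type, vs.index, vs.uid = rs.stream.depth, 0, 0
vs.width, vs.height, vs.fps = W, H, 30
vs.bpp, vs.fmt = 2, rs.format.z16
vs.intrinsics = intr
sw_profile = sw_sensor.add_video_stream(vs)

sync = rs.syncer()
sw_sensor.open(sw_profile)
sw_sensor.start(sync)

# Wrap the numpy array in a software frame and inject it
depth_bytes = depth_np.tobytes()  # keep a reference alive while the frame is in flight
sw_frame = rs.software_video_frame()
sw_frame.pixels = depth_bytes
sw_frame.stride = W * 2
sw_frame.bpp = 2
sw_frame.frame_number = 0
sw_frame.timestamp = 0.0
sw_frame.domain = rs.timestamp_domain.system_time
sw_frame.profile = sw_profile.as_video_stream_profile()
sw_sensor.on_video_frame(sw_frame)

# The injected frame comes back as a regular SDK frame that pc.calculate() accepts
frames = sync.wait_for_frames()
depth_frame = frames.get_depth_frame()
points = rs.pointcloud().calculate(depth_frame)
```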

DeepakChandra2612 commented 2 years ago

Hi All,

Are there any updates on the feature to convert a numpy array back into a RealSense depth frame, so that the built-in functions of pyrealsense2 can be used?

Thanks

MartyG-RealSense commented 2 years ago

Hi @DeepakChandra2612 There is no further information to report about the official feature request to Intel, though it remains active. Nor am I aware of any further progress by RealSense users themselves on numpy to rs2::frame conversion beyond the https://github.com/IntelRealSense/librealsense/issues/2551 link already mentioned.

sanjaiiv04 commented 9 months ago

I am trying to take a region of the depth map and convert it into a point cloud. To do this I cropped the depth map down to the ROI I wanted, and in doing so the depth frame was converted into a numpy array. I now want to convert this numpy array back into the depth frame format so that I can extract a point cloud from the cropped region. How can I do that? Here is the code:

```python
import cv2
import numpy as np
import open3d as o3d
import pyrealsense2 as rs
# `model` is assumed to be a pre-loaded segmentation model (e.g. an Ultralytics YOLO model)

pipeline = rs.pipeline()
config = rs.config()
colorizer = rs.colorizer()
dec_filter = rs.decimation_filter()  # Decimation - reduces depth frame density
spat_filter = rs.spatial_filter()    # Spatial - edge-preserving spatial smoothing
temp_filter = rs.temporal_filter()

pipeline_wrapper = rs.pipeline_wrapper(pipeline)
pipeline_profile = config.resolve(pipeline_wrapper)
device = pipeline_profile.get_device()
device_product_line = str(device.get_info(rs.camera_info.product_line))

found_rgb = False
for s in device.sensors:
    if s.get_info(rs.camera_info.name) == 'RGB Camera':
        found_rgb = True
        break
if not found_rgb:
    print("The demo requires a depth camera with a color sensor")
    exit(0)

config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)

if device_product_line == 'L500':
    config.enable_stream(rs.stream.color, 960, 540, rs.format.bgr8, 30)
else:
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

profile = pipeline.start(config)
pc = rs.pointcloud()
points = rs.points()
depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()

clipping_distance_in_meters = 1  # 1 meter
clipping_distance = clipping_distance_in_meters / depth_scale
intrinsics_depth = profile.get_stream(rs.stream.depth).as_video_stream_profile().get_intrinsics()
intrinsics_color = profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()

pcd_depth_object = o3d.geometry.PointCloud()

# Wait for a coherent pair of frames: depth and color
frames = pipeline.wait_for_frames()
colorized = colorizer.process(frames)

# Align the depth frame to the RGB frame
align = rs.align(rs.stream.color)
frames = align.process(frames)

depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()
# Post-processing filters (currently disabled):
# depth_frame = dec_filter.process(depth_frame)
# depth_frame = spat_filter.process(depth_frame)
# depth_frame = temp_filter.process(depth_frame)

# Convert images to numpy arrays
depth_image = np.asanyarray(depth_frame.get_data())
color_image = np.asanyarray(color_frame.get_data())

# Create a binary mask of the same size as the depth map for cropping
mask_image = np.zeros_like(depth_image, dtype=np.uint8)

# Object IDs for filtering the desired classes for detection
desired_class_ids = [0]

# Segmentation
results = model.predict(source=color_image, stream=True)
for person in results:
    indexes = [index for index, item in enumerate(person.boxes.cls) if item in desired_class_ids]
    mask = person.masks[indexes].xy  # mask coordinates for each object detected in the frame
    for i in range(len(mask)):  # iterate through each mask
        act_mask = mask[i].tolist()  # the mask data is a list of coordinates

        # Convert the mask data into a list of contours
        mask_contours = [np.array(act_mask, dtype=np.int32)]

        # Fill mask_image with white pixels at the mask coordinates
        cv2.fillPoly(mask_image, mask_contours, color=255)

# Apply colormap on depth image (image must be converted to 8-bit per pixel first)
depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.05), cv2.COLORMAP_JET)
depth_object = cv2.bitwise_and(depth_image, depth_image, mask=mask_image)
```

Here, depth_object is the cropped depth map, which is in numpy array format. I need to convert it into the RealSense depth frame format so that I can extract a point cloud from it.
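For this particular goal, one way to sidestep the conversion is to leave depth_frame untouched, compute the full point cloud from it, and apply the segmentation mask to the vertices afterwards. A sketch reusing pc, depth_frame, mask_image, and pcd_depth_object from the snippet above; the .view(np.float32) reinterpretation of the vertex buffer is a widely used community recipe rather than a documented API:

```python
# Compute the point cloud for the whole frame, then keep only the vertices
# whose pixels fall inside the segmentation mask (vertex order is row-major,
# matching the depth image layout, so the flattened mask indexes it directly).
points = pc.calculate(depth_frame)
vtx = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
masked_vtx = vtx[mask_image.reshape(-1) > 0]
pcd_depth_object.points = o3d.utility.Vector3dVector(masked_vtx)
```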

MartyG-RealSense commented 9 months ago

Hi @sanjaiiv04 Converting numpy to rs2::frame remains an unsolved problem, unfortunately, and there is no further advice available.
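That said, for a use case like the one above the conversion can also be avoided entirely by deprojecting the masked numpy depth map with the stream intrinsics. A minimal sketch, assuming depth_object, intrinsics_color, depth_scale, and pcd_depth_object from the earlier snippet; lens distortion is ignored, which is usually acceptable for D400 depth streams that report zero distortion coefficients:

```python
import numpy as np
import open3d as o3d

def deproject(depth_u16, intrin, depth_scale):
    """Vectorised pinhole deprojection of a z16 depth image to an Nx3 array (metres)."""
    h, w = depth_u16.shape
    z = depth_u16.astype(np.float32) * depth_scale
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - intrin.ppx) / intrin.fx * z
    y = (v - intrin.ppy) / intrin.fy * z
    pts = np.dstack((x, y, z)).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # keep only pixels with valid (non-zero) depth

# Because depth_frame was aligned to the color stream above, the color
# intrinsics are the ones that match the pixel grid of depth_object.
points_xyz = deproject(depth_object, intrinsics_color, depth_scale)
pcd_depth_object.points = o3d.utility.Vector3dVector(points_xyz)
```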

charlielito commented 4 months ago


It's mind-blowing that four years later you still haven't added support for this. Unbelievable.

mxshen commented 2 months ago

Still waiting for an answer in 2024...