IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

No depth data at some points, RealSense D435i #11286

Closed SarojDebnath closed 1 year ago

SarojDebnath commented 1 year ago
Required Info
Camera Model: D435i
Operating System & Version: Windows 10
Kernel Version (Linux Only): Not Applicable
Platform: PC
SDK Version: pyrealsense2
Language: Python
Segment: Robot
Issue Description

Here, I have tried to use the GrabCut algorithm and then retrieve the 3-D coordinates using the pixel and depth information. However, a few of the points in the image have a depth value of 0, which means (0,0,0) as world coordinates. This behaviour is strange because all of the depth data appears to be integrated. I searched a lot through the closed and open issues of the repository but couldn't find any fruitful solution. How can I solve it?

import pyrealsense2 as rs
import numpy as np
import cv2

def distance(event, x, y, flags, param):
    # Mouse callback: on a left click, read the depth at the clicked pixel and
    # deproject it into 3-D camera coordinates
    if event == cv2.EVENT_LBUTTONDOWN:
        print(event)
        cv2.circle(capture, (x, y), 2, (128, 0, 128), -1)
        print((x, y))
        # Query the aligned depth frame so the pixel matches the displayed colour image
        d = aligned_depth_frame.get_distance(int(x), int(y))
        print(d)
        x_w, y_w, z_w = convert_depth_to_phys_coord_using_realsense(int(x), int(y), d, camera_info)
        print('Points are:', x_w, y_w, z_w)

def convert_depth_to_phys_coord_using_realsense(x, y, depth, cameraInfo):
    # Deproject a pixel (x, y) with a depth in metres into a 3-D point using the
    # stream intrinsics
    _intrinsics = rs.intrinsics()
    _intrinsics.width = cameraInfo.width
    _intrinsics.height = cameraInfo.height
    _intrinsics.ppx = cameraInfo.ppx
    _intrinsics.ppy = cameraInfo.ppy
    _intrinsics.fx = cameraInfo.fx
    _intrinsics.fy = cameraInfo.fy
    _intrinsics.model = rs.distortion.none
    _intrinsics.coeffs = [i for i in cameraInfo.coeffs]
    result = rs.rs2_deproject_pixel_to_point(_intrinsics, [x, y], depth)
    return result[0], -result[1], -result[2]

pipeline = rs.pipeline()
config = rs.config()
pipeline_wrapper = rs.pipeline_wrapper(pipeline)
pipeline_profile = config.resolve(pipeline_wrapper)
device = pipeline_profile.get_device()
device_product_line = str(device.get_info(rs.camera_info.product_line))
print(device_product_line)
found_rgb = False
for s in device.sensors:

    if s.get_info(rs.camera_info.name) == 'RGB Camera':
        found_rgb = True
        print("There is a depth camera with color sensor")
        break
if not found_rgb:

    print("The demo requires Depth camera with Color sensor")
    exit(0)

config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)

depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
print("Depth Scale is: " , depth_scale)
clipping_distance_in_meters = 0.25
clipping_distance = clipping_distance_in_meters / depth_scale
print(clipping_distance)
preset_range = depth_sensor.get_option_range(rs.option.visual_preset)
for i in range(int(preset_range.max)):

    visulpreset = depth_sensor.get_option_value_description(rs.option.visual_preset,i)
    print('%02d: %s'%(i,visulpreset))
    if visulpreset == "High Accuracy":

        depth_sensor.set_option(rs.option.visual_preset, i)

depth_sensor.set_option(rs.option.laser_power, 180)

depth_sensor.set_option(rs.option.depth_units, 0.0005)
# Re-read the depth scale after changing the depth units so the clipping
# threshold (expressed in raw depth units) stays consistent
depth_scale = depth_sensor.get_depth_scale()
clipping_distance = clipping_distance_in_meters / depth_scale
align_to = rs.stream.color
align = rs.align(align_to)

for x in range(5):

    pipeline.wait_for_frames()

try:

    while True:

        frames = pipeline.wait_for_frames()
        color_frame = frames.get_color_frame()
        depth_frame = frames.get_depth_frame()

        if color_frame:
            aligned_frames = align.process(frames)

            aligned_depth_frame = aligned_frames.get_depth_frame() # aligned_depth_frame is a 640x480 depth image
            color_frame = aligned_frames.get_color_frame()

            if not aligned_depth_frame or not color_frame:
                continue

            depth_image = np.asanyarray(aligned_depth_frame.get_data())
            color_image = np.asanyarray(color_frame.get_data())

            black_color = 0
            depth_image_3d = np.dstack((depth_image,depth_image,depth_image)) #depth image is 1 channel, color is 3 channels
            bg_removed = np.where((depth_image_3d > clipping_distance) | (depth_image_3d <= 0), black_color, color_image)

            cv2.namedWindow('depth_cut', cv2.WINDOW_NORMAL)
            cv2.imshow('depth_cut', bg_removed)
            camera_info = aligned_depth_frame.profile.as_video_stream_profile().intrinsics
            if cv2.waitKey(1) & 0xFF==27:
                capture=bg_removed
                cv2.destroyAllWindows()
                break

finally:

    # Show the captured background-removed image; left-clicks report 3-D coordinates
    cv2.namedWindow('object')
    cv2.setMouseCallback('object', distance)
    while True:
        cv2.imshow('object', capture)
        if cv2.waitKey(10) & 0xFF == 27:
            cv2.destroyAllWindows()
            break
    # Stop the pipeline once the interactive window has been closed
    pipeline.stop()

OUTPUT:

[screenshot: wrong-deprojection]
MartyG-RealSense commented 1 year ago

Hi @SarojDebnath Coordinates may register a depth of zero if there is no depth information at those coordinates. Even if there is a solid object present at those coordinates in the real world, the camera may not have been able to detect depth at that particular surface area. Reasons for this could include the area having a reflective surface or being colored dark grey / black. In the case of dark grey / black surfaces, if they are not reflective then casting a strong light source onto that area can help to bring out depth information from them.

Areas without depth information may appear as black, giving the impression that there is data there when in fact it is simply empty space on the image. An example would be scanning a black cable, which could result in a cable-shaped area on the image of empty space without depth data within it.
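
As a rough illustration (not part of the original script), one way to see how much of an aligned depth image is actually invalid is to count the zero-valued pixels; the SDK's hole-filling post-processing filter can also patch small gaps. The filter mode chosen below is only an example setting, not a recommendation:

import numpy as np
import pyrealsense2 as rs

def report_and_fill_holes(aligned_depth_frame):
    # Count how many pixels of the aligned depth frame have no depth data (value 0)
    depth_image = np.asanyarray(aligned_depth_frame.get_data())
    invalid = np.count_nonzero(depth_image == 0)
    print('Invalid depth pixels: %d / %d (%.1f%%)'
          % (invalid, depth_image.size, 100.0 * invalid / depth_image.size))

    # Optionally fill small holes by propagating neighbouring valid depth values
    # (mode 2 = "nearest from around"; an illustrative choice)
    hole_filling = rs.hole_filling_filter()
    hole_filling.set_option(rs.option.holes_fill, 2)
    return hole_filling.process(aligned_depth_frame)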

SarojDebnath commented 1 year ago

Hi @MartyG-RealSense, I agree with the possible reasons that you have mentioned. However, I would like to know whether there are ways to determine how intense the lighting needs to be, or is it just by trial that we determine the conditions for the best result?

MartyG-RealSense commented 1 year ago

It would likely depend on the material type and properties of the particular surface that is being observed and so would require tests to find the optimum lighting conditions.

If it is a reflective surface then it is possible to dampen the glare from reflections so that it is more easily readable by the camera. This could be done by fitting a thin-film polarizing filter over the lenses on the outside of the camera, applying a fine spray-on powder (such as baby powder or foot powder) to the surface, or using a professional 3D modelling reflection-damping aerosol spray (such as those used for photographing jewelry for a catalog).

SarojDebnath commented 1 year ago

Thank you @MartyG-RealSense for your super fast reply. It solved some of my doubts. Can you please also mention a few techniques to achieve the best depth result? I have tried to fine-tune using the documentation available from RealSense, but it is not up to the mark.

The conditions are: my camera always observes from the same position, looking at a similar object with only small variations in its position.

MartyG-RealSense commented 1 year ago

If the lighting level at the camera's location is consistent all day (such as an indoor room with artificial lighting) then you may find it beneficial to disable auto-exposure and use a fixed manual exposure value that does not vary.
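
As a rough sketch, disabling auto-exposure and fixing a manual exposure on the depth sensor could look like this; the exposure value is only a placeholder to tune for your own lighting:

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start(rs.config())

depth_sensor = profile.get_device().first_depth_sensor()
# Turn off auto-exposure and use a fixed value; 8500 microseconds is just a
# placeholder and should be tuned for the lighting at the camera's location
depth_sensor.set_option(rs.option.enable_auto_exposure, 0)
depth_sensor.set_option(rs.option.exposure, 8500)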

If an observed object is thin (such as a pen or toothpick) then changing the camera's depth scale from its default value of '0.001' to 0.0001 may help to fill in holes in the image, as described at https://github.com/IntelRealSense/librealsense/issues/8228
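
Relative to the script earlier in this issue, that change would just be the following (re-reading the depth scale afterwards so any threshold expressed in raw units stays consistent):

# depth_sensor obtained as in the script above:
# depth_sensor = profile.get_device().first_depth_sensor()
depth_sensor.set_option(rs.option.depth_units, 0.0001)  # 0.1 mm per unit instead of 1 mm
depth_scale = depth_sensor.get_depth_scale()            # re-read after changing the units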

If the D435i camera is further than 3 meters from the observed surface then depth error will increase the further away the surface is (3 meters and beyond is the point on the D435i model where the error becomes noticeable). This is due to a phenomenon called RMS error, where depth error grows as an observed object moves further away from the camera.
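
As a rough, illustrative estimate only (the baseline, focal length in pixels and subpixel accuracy below are nominal D435-class numbers, not calibrated values), the expected RMS depth error at a given distance can be approximated like this:

def approx_depth_rms_error_mm(z_mm, focal_px=380.0, baseline_mm=50.0, subpixel=0.08):
    # Stereo depth RMS error estimate: grows with the square of the distance.
    # All constants here are rough nominal values for illustration only.
    return (z_mm ** 2) * subpixel / (focal_px * baseline_mm)

for z_m in (1, 2, 3, 5):
    print('%d m -> ~%.1f mm RMS error' % (z_m, approx_depth_rms_error_mm(z_m * 1000.0)))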

There may also be depth error if the observed surface is closer to the D435i camera than its 0.1 meters / 10 cm minimum depth sensing distance. Increasing the camera's Disparity Shift value to '100' instead of the default '0' will reduce the camera's minimum distance and enable it to get closer to surfaces, at the expense of the maximum observable depth distance being reduced.
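
Disparity Shift is not a regular option; it is set through the advanced-mode depth table. A minimal sketch of that (assuming a single connected D400-series device) would be:

import pyrealsense2 as rs

ctx = rs.context()
device = ctx.query_devices()[0]

# Disparity shift lives in the advanced-mode depth table rather than the option list
advanced = rs.rs400_advanced_mode(device)
if not advanced.is_enabled():
    advanced.toggle_advanced_mode(True)  # the device may reset and need to be re-queried

depth_table = advanced.get_depth_table()
depth_table.disparityShift = 100  # 0 is the default; higher values shorten the minimum range
advanced.set_depth_table(depth_table)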

If the surface has similar looking repeating patterns horizontally and vertically, such as floor / ceiling tiles, then this can confuse the camera's depth sensing. A guide to reducing this 'repetitive pattern' negative effect can be found at the link below.

https://dev.intelrealsense.com/docs/mitigate-repetitive-pattern-effect-stereo-depth-cameras

SarojDebnath commented 1 year ago

Thank you for the information.