IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

D435 Camera, The edge of object problem #10133

Closed: GHCK57 closed this issue 2 years ago

GHCK57 commented 2 years ago

Required Info
Camera Model: D400
Firmware Version: 05.12.15.50
Operating System & Version: Win 10
SDK Version: legacy / 2.<?>.<?>
Language: Python
Segment: Others

Issue Description

Hi,

While working with the RealSense D435 camera I ran into a problem. When the camera is close to the object, approximately 35-40 cm, the depth frame looks good. But when I move the camera farther away, to approximately 70 cm, the edges of the object are deformed.

I use the "opencv_pointcloud_viewer.py " shared by Realsense to get the point cloud data. This code -->

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/opencv_pointcloud_viewer.py

This is what I want to measure: there is a surface with an object on it, and I am only interested in the edges.

WIN_20220106_12_15_12_Pro

This image shows the .ply file captured with the camera 70 cm away.

70cm

And this one was taken from 35 cm away.

35cm

As you can see, there is a clear difference in the edges of the object. I have to work at 70 cm. Why does this difference occur, and how can I deal with this problem? I just want flat edges.

I hope I have explained myself clearly. I look forward to your help. Thanks a lot.

MartyG-RealSense commented 2 years ago

Hi @GHCK57 Whilst the box looks like it has more edges detected at 35 cm, the blue edges actually likely represent empty areas of missing depth data rather than depth detail of the box edges. The same is true for the increased number of blue holes on the table surface.

It also looks like at 70 cm range, the camera is having difficulty distinguishing the plain white edges of the box from the similar plain white texture of the table.

You may achieve a better quality image if you ensure that the IR emitter is enabled in your Python script. When the emitter is enabled, it casts a semi-random pattern of dots onto surfaces in the scene that can be used as a 'texture source' for analyzing low-textured or non-textured surfaces (tables, doors, walls, etc) for depth detail.

You could try integrating the Python code below into opencv_pointcloud_viewer.py, which enables the IR emitter with the set_option(rs.option.emitter_enabled, 1) instruction.

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
pipeline_profile = pipeline.start(config)
device = pipeline_profile.get_device()

# The depth sensor is the first sensor on D400 series devices
depth_sensor = device.query_sensors()[0]

# Turn on the IR dot-pattern projector if the device supports it
if depth_sensor.supports(rs.option.emitter_enabled):
    depth_sensor.set_option(rs.option.emitter_enabled, 1)
GHCK57 commented 2 years ago

Hi MartyG,

Thanks for your reply. I inserted these lines into the opencv_pointcloud_viewer.py file after the pipeline.start(config) call:

depth_sensor = device.query_sensors()[0]

if depth_sensor.supports(rs.option.emitter_enabled):
    depth_sensor.set_option(rs.option.emitter_enabled, 1)

I also used a different colored box (the RealSense camera's original box). But when I capture point cloud data from different distances, the problem persists.

This photo shows how the camera views the object.

WIN_20220106_14_51_25_Pro

This is how opencv_pointcloud_viewer.py shows the object.

image

-- Camera 70 cm away from the object --

image

-- Camera 35 cm away from the object --

image

Still, 35 cm looks better than 70 cm. What is the problem?

GHCK57 commented 2 years ago

I also used the instruction below to enable the infrared stream, but it didn't change anything.

config.enable_stream(rs.stream.infrared, rs.format.y8, 30)

MartyG-RealSense commented 2 years ago

It appears that enabling the IR emitter has at least closed up most of the holes in the table surrounding the box, so that is an improvement.

I am reminded of another Python box scanning case at https://github.com/IntelRealSense/librealsense/issues/6857#issuecomment-660532275 where the edge of the box seemed to be pointing diagonally like your 70 cm image above instead of 90 degrees straight downwards. Their eventual solution - shared in https://github.com/IntelRealSense/librealsense/issues/6857#issuecomment-660540868 - was to create a custom json camera configuration file and load it into their project to adjust the camera settings to produce a more accurate box pointcloud.

The increased bumpiness of the cloud at 70 cm compared to 35 cm is a known phenomenon referred to as 'waviness', where the surface becomes increasingly wavy as the camera moves further away from the observed surface. A RealSense user in https://github.com/IntelRealSense/librealsense/issues/1375#issuecomment-373561886 found that using a custom json could reduce some of the waviness.

A RealSense team member also suggested in https://github.com/IntelRealSense/librealsense/issues/1375#issuecomment-373726950 that applying post-processing filtering such as edge-preserving and temporal averaging could help to reduce waviness. These concepts are covered in Intel's post-processing guide in the link below. Performing a page 'find' operation for the terms edge-preserving and temporal averaging can help to find relevant sections of the guide quickly.

https://dev.intelrealsense.com/docs/depth-post-processing
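
As a rough illustration, a minimal sketch of applying those two filters with pyrealsense2 might look like this (the pipeline setup lines are illustrative; only the filter calls matter):

import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start(rs.config())

# Edge-preserving smoothing and temporal averaging, as suggested above
spatial = rs.spatial_filter()
temporal = rs.temporal_filter()

frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()

# Apply the filters in sequence before generating the pointcloud
filtered = spatial.process(depth_frame)
filtered = temporal.process(filtered)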

GHCK57 commented 2 years ago

Hi @MartyG-RealSense ,

I had already seen and read topics #6857 and #1375 and tried "ShortrangePreset.json", but it didn't solve my problem. In addition to those topics, I also read the white paper you shared with me: https://dev.intelrealsense.com/docs/depth-post-processing . Before sharing the photos posted earlier, I exported them as .ply files and viewed them in MeshLab, so I don't think it is a viewer problem.

I tried many configurations in the Intel RealSense Viewer v2.48 GUI. Whatever I did, I could not get a point cloud as precise as the one captured 35 cm from the object.

This is a different configuration, but the camera is still trying to merge the edge of the object with the surface. (Camera mounted 70 cm from the object.)

image

Depth frame

image

Color frame

image

Infrared frame

image

MartyG-RealSense commented 2 years ago

Applying a post-processing filter that has a hole-filling function will likely help to close up the black area surrounding the box. You can test this in the RealSense Viewer by going to the Post-Processing category of the options side-panel and left-clicking on the red icon beside the Hole-Filling filter (which indicates a Disabled status) to turn it to blue (enabled).

image

If you find that hole-filling works well for you, you could implement hole-filling in your Python application by applying the Spatial post-processing filter.

image
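
For reference, a minimal Python sketch of hole-filling via the spatial filter's holes_fill option (the setup lines are illustrative):

import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start(rs.config())
depth_frame = pipeline.wait_for_frames().get_depth_frame()

spatial = rs.spatial_filter()
# 0 disables the in-filter hole filling; higher values fill progressively larger holes
spatial.set_option(rs.option.holes_fill, 2)
filtered = spatial.process(depth_frame)

# The SDK also provides a dedicated filter for this:
# filled = rs.hole_filling_filter().process(depth_frame)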

GHCK57 commented 2 years ago

Thanks for your reply, but I don't need the hole-filling filter; it is not suitable for my application. I really don't know what the problem is. The depth frame is good with the camera 35 cm away, but at 70 cm it measures incorrectly.

This is the depth frame I want even when I put the camera farther from the object:

image

MartyG-RealSense commented 2 years ago

How does the box look in the RealSense Viewer if you change the Depth Units option from its default of 0.001 to 0.0001? You can left-click on the pencil icon beside the option to type the value in on the keyboard instead of using the adjustment slider.

image
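
For reference, a minimal sketch of changing the depth unit scale from Python instead of the Viewer (the setup lines are illustrative):

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start(rs.config())
depth_sensor = profile.get_device().first_depth_sensor()

# 0.0001 = 0.1 mm per depth unit (default 0.001 = 1 mm); note that finer
# units also shorten the maximum range a 16-bit depth value can represent
if depth_sensor.supports(rs.option.depth_units):
    depth_sensor.set_option(rs.option.depth_units, 0.0001)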

GHCK57 commented 2 years ago

Hi @MartyG-RealSense

I had already tried that too.

For Depth Units --> 0.001

image

For Depth Units --> 0.0001

image

MartyG-RealSense commented 2 years ago

If your goal is to measure the box then the SDK Python example program box_dimensioner_multicam may meet your needs. It works with a single camera, despite the program's name.

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python/examples/box_dimensioner_multicam

In that program, the box is placed upon a print-out of a chessboard to calibrate the camera position to the board. A bounding box is then automatically drawn around the box on an RGB image, and volume measurements for the box, calculated from the depth data, are displayed.

image

GHCK57 commented 2 years ago

Hi @MartyG-RealSense

I have worked with that code. I need 3D reconstruction while calculating the dimensions of the box, which is why I need a good depth frame. Because the point cloud data taken from 70 cm is bad, everything goes wrong. As you can see from the photos I shared before, the camera sees the edge of the box and the surface as the same. This makes calculating the volume impossible. Please, I need more advice. Is this caused by the IR pattern emitted from the camera, or by something else?

MartyG-RealSense commented 2 years ago

If your project is able to take measurements manually instead of them needing to be automatic then you could try using the RealSense Viewer's Measure tool in its 3D pointcloud mode. It enables you to drag out a line between two points on the pointcloud image and receive a measurement of the real-world distance between those two points, or hold the shift key to connect more than two points and measure the area.

image

If you prefer to continue with the current method then I would recommend testing with the camera at a straight-ahead or straight-down angle (0 or 90 degrees) instead of the camera being tilted, as indicated by the tilted angle of the table. The angle of the camera may be distorting the point cloud, so having the camera at a straight angle will confirm whether or not the camera angle is the cause.

GHCK57 commented 2 years ago

I did what you said and put the camera 90 degrees above the object, approximately 60 cm away. The configuration was not changed; everything else is the same. It looks like this. The camera still connects the table surface to the edge of the object.

image

When I put the camera 40 cm above the box, everything is fine. That is what I want. But I have to work at a 70-75 cm distance.

image

MartyG-RealSense commented 2 years ago

I attempted to replicate the test in the Viewer with a pack of drinks bottles but did not experience the edge loss at 70 cm.

35 cm

image

70 cm

image

Could you check whether disabling the two GLSL options in the Viewer's settings before taking the scan improves your edge detection, please? This can be done with the instructions at https://github.com/IntelRealSense/librealsense/issues/8110#issuecomment-754705023

Also, try completely disabling all post-processing filters that the Viewer applies by default by left-clicking on the blue icon beside Post-Processing to turn the icon to red and turning off all filters.

image

GHCK57 commented 2 years ago

Hi again @MartyG-RealSense

First of all, the data you captured from 70 cm is perfect.

I tried disabling the GLSL options. It didn't work, but I discovered something. I will try to explain it step by step, because I don't know exactly what changed. I need your help to figure out the solution.

First I want to share the photo taken after the GLSL options were disabled. Nothing seems to have changed.

image

Then, while looking through topic #8110, I saw your comment: https://github.com/IntelRealSense/librealsense/issues/8110#issuecomment-754653180

I checked my tools under Program Files (x86) --> Intel RealSense SDK 2.0 and tried running the exe files. After starting rs-measure.exe I saw the difference: the sides of the box started to look good. But I didn't change anything; all parameters are the same. I only ran rs-measure.exe. Below is a picture taken from the camera afterwards.

There is still some noise, but it is better than before.

image

What changed it?

MartyG-RealSense commented 2 years ago

The main difference that comes to mind about rs-measure is that instead of performing depth to color alignment, it performs the much rarer color to depth alignment. On the D435 and D435i models, which have a smaller field of view (FOV) size on the color sensor than on the depth sensor, the overall image is therefore stretched out to fit the full size of the window without letterbox borders at the edges as the color FOV resizes to match the larger depth FOV size.
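
For illustration, color to depth alignment can be requested in Python by passing rs.stream.depth as the align-to target; a minimal sketch (the setup lines are illustrative):

import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start(rs.config())

# rs.align(rs.stream.depth) maps color onto depth (color to depth alignment),
# the reverse of the more common rs.align(rs.stream.color)
align = rs.align(rs.stream.depth)
frames = align.process(pipeline.wait_for_frames())
aligned_color = frames.get_color_frame()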

GHCK57 commented 2 years ago

Okay. But if I hardware-reset the camera using the RealSense Viewer, everything will go back to how it was. What should I do next to get the correct depth frame?

image

MartyG-RealSense commented 2 years ago

Is there any positive change in the Viewer image if you maximize the Laser Power option to '360' instead of the default '150'? Increasing laser power makes the IR dot pattern more visible to the camera and can increase the amount of depth detail on an image.

image

GHCK57 commented 2 years ago

Hi @MartyG-RealSense

Sorry for my late reply. I generally set the laser power option to 330, but as far as I can see there is no difference visible to the eye.

MartyG-RealSense commented 2 years ago

I looked at the code of rs-measure to see what might account for the image improving when the program is run but reverting to the worse image after a hardware reset. When rs-measure is run, it loads and applies the High Accuracy camera configuration preset. When a camera reset is performed, the camera would return to its default configuration.

You could therefore try adding a preset load instruction for 'High Accuracy' to your opencv_pointcloud_viewer.py script using the Python code in https://github.com/IntelRealSense/librealsense/issues/2577#issuecomment-432137634 or instead selecting the preset from the presets drop-down menu in the Viewer.

image
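
As a rough sketch of the preset-load approach from the linked case, the preset name can be matched against the visual_preset option's value descriptions (setup lines illustrative):

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start(rs.config())
depth_sensor = profile.get_device().first_depth_sensor()

# Walk the available preset descriptions and select 'High Accuracy'
preset_range = depth_sensor.get_option_range(rs.option.visual_preset)
for i in range(int(preset_range.max) + 1):
    name = depth_sensor.get_option_value_description(rs.option.visual_preset, i)
    if name == "High Accuracy":
        depth_sensor.set_option(rs.option.visual_preset, i)
        break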

GHCK57 commented 2 years ago

It looks pretty good. After a hardware reset, before opening the stream, I chose the High Accuracy camera configuration preset. The box looks like this.

image

There is almost no edge connection. But if you look carefully, you can still see an attempt to connect the edge of the object and the surface of the table.

image

MartyG-RealSense commented 2 years ago

How does it look if you change "High Accuracy" to "Medium Density"?

The Medium Density preset provides a good balance between accuracy and the amount of detail on the image.

GHCK57 commented 2 years ago

Medium Density Preset

image

MartyG-RealSense commented 2 years ago

Is the above image from 70-75 cm distance?

GHCK57 commented 2 years ago

Yes it is.

GHCK57 commented 2 years ago

Hi @MartyG-RealSense

This is the color frame taken by the camera.

image

This is the point cloud data.

image

I captured the point cloud of that box and fitted planes to two faces of the box (the upper face and the front face). You can see this in the photo. The yellow plane is fitted to the upper face of the box; it is good.

The red plane is fitted to the front face of the box. As you can see, there is a problem on that face. I think it is caused by the sharp edge of the box; for some reason the camera sees it as rounded.

image

image

image

Why does the sharp edge appear as if there were a radius there? Is that caused by alignment or not?

image

Also, on the bottom-left and bottom-right corners there are some points that do not belong to the box. As you can see, their color is white; I think they are texture from the surface of the table. Why is it like this?

This is how I get the point cloud data, where self.w and self.h are the width and height of the frame, which in my app is 640x480:

XYZ = np.array([0] * (self.w * self.h * 3), dtype='float')
XYZ = np.asanyarray(XYZ).reshape(-1, 3)

i = 0
for y in range(0, self.h, 1):
    for x in range(0, self.w, 1):
        depth = frame.get_distance(x, y)
        depth_point_in_meters_camera_coords = rs.rs2_deproject_pixel_to_point(self.color_intrinsics, [x, y], depth)
        XYZ[i] = depth_point_in_meters_camera_coords
        i = i + 1

return XYZ

MartyG-RealSense commented 2 years ago

Whilst you certainly can use 640x480 resolution, the optimal depth resolution for accuracy on a D435 is 848x480, so 640x480 depth measurements may be less accurate.

If you are using get_distance() and depth-color alignment in your Python application then alignment is known to cause significant depth value inaccuracies at the outer regions of the image, whilst the values are relatively correct at the image's center area.

An example of a case that features this phenomenon is at https://github.com/IntelRealSense/librealsense/issues/6749#issuecomment-654185205 - please read downwards from the point that I have linked to in the discussion (skip past the script to the text underneath it):

If you are creating a pointcloud by performing depth to color alignment and then obtaining the 3D real-world point cloud coordinates with rs2_deproject_pixel_to_point (as indicated in your script above), the use of alignment may also result in inaccuracies. Using points.get_vertices() instead to generate the point cloud, and then storing the vertices in a numpy array whose values can be printed out, should provide better accuracy. This subject is discussed in detail in the Python case at https://github.com/IntelRealSense/librealsense/issues/4315
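
A minimal sketch of the get_vertices() approach (setup lines illustrative; this mirrors the pattern used in opencv_pointcloud_viewer.py):

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start(rs.config())
depth_frame = pipeline.wait_for_frames().get_depth_frame()

# Generate the pointcloud directly from the (unaligned) depth frame
pc = rs.pointcloud()
points = pc.calculate(depth_frame)

# get_vertices() returns structured xyz records; view them as a float32 Nx3 array
verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)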

GHCK57 commented 2 years ago

Okay, I will use the 848x480 depth and color resolution. What do you think about this approach? Is it the same as the rs2_deproject_pixel_to_point function?


depth = frame.get_distance(x, y)
X = (x - self.color_intrinsics.ppx) / self.color_intrinsics.fx * depth
Y = (y - self.color_intrinsics.ppy) / self.color_intrinsics.fy * depth

And I will use the following to get the point cloud data.


points = pc.calculate(depth_frame)
vertices = np.asarray(points.get_vertices(2)).reshape(-1, 3)  # XYZ
texture = np.asarray(points.get_texture_coordinates()).reshape(-1, 2)  # UV

What if I want to get the XYZ coordinates of a specified pixel coordinate? I need this to calibrate the system. There is a red circle on the table; the OpenCV library detects the red circle in the camera's color frame and gives me the pixel coordinates of its center. From those pixel coordinates I was able to calculate the XYZ world coordinates with the rs2_deproject_pixel_to_point function. If rs2_deproject_pixel_to_point has bad accuracy, how do I get the XYZ world coordinates from the pixel?

MartyG-RealSense commented 2 years ago

Instead of using depth to color alignment on the entire image, you can retrieve the XYZ of a single specified coordinate without using alignment by using an SDK instruction called rs2_project_color_pixel_to_depth_pixel. https://github.com/IntelRealSense/librealsense/issues/5603 is a good reference for using this instruction in Python. The RealSense user in that case shares their final successful Python code at https://github.com/IntelRealSense/librealsense/issues/5603#issuecomment-574019008
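
As a rough sketch of calling this instruction from Python: the extrinsics argument order below follows my reading of the SDK's rsutil.h declaration, so please verify it against the working script linked above; the pixel values and the depth search range are illustrative.

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)

depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()

depth_intrin = depth_frame.profile.as_video_stream_profile().intrinsics
color_intrin = color_frame.profile.as_video_stream_profile().intrinsics
depth_to_color = depth_frame.profile.get_extrinsics_to(color_frame.profile)
color_to_depth = color_frame.profile.get_extrinsics_to(depth_frame.profile)

color_pixel = [424.0, 240.0]  # e.g. the detected circle center (hypothetical values)
depth_pixel = rs.rs2_project_color_pixel_to_depth_pixel(
    depth_frame.get_data(), depth_scale,
    0.1, 1.0,  # depth_min, depth_max search range in meters
    depth_intrin, color_intrin,
    color_to_depth, depth_to_color,
    color_pixel)

# Deproject the matched depth pixel to a 3D point using the depth intrinsics
depth = depth_frame.get_distance(int(depth_pixel[0]), int(depth_pixel[1]))
point = rs.rs2_deproject_pixel_to_point(depth_intrin, depth_pixel, depth)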

GHCK57 commented 2 years ago

Okay, thanks a lot for the information.

Do the values following the mouse in the RealSense Viewer show us the 3D XYZ point?

Is it X = 0.091, Y = 0.014, Z = 0.738?

image

MartyG-RealSense commented 2 years ago

Yes, it is showing the 3D real-world XYZ, with Z being the real-world distance of the coordinate from the camera in meters.

GHCK57 commented 2 years ago

I have no idea about X and Y, but I think the Z value is wrong. I measured it: the Viewer shows the center of the red point at 0.76 m, but there is a big difference from what I measured myself.

image

MartyG-RealSense commented 2 years ago

On the cased 400 Series cameras, depth measurements are taken from the front glass of the camera. So the real-world tape measure distance from the camera to the observed point on the table would be approximately 72 cm / 0.72 m.

There is a closer match of 73.8 cm / 0.738 m on the image that you posted above.

image

The camera measurements will have a certain amount of error in them, as error starts at around zero at the camera lenses and increases linearly over distance (a phenomenon called RMS error). My understanding is that at 1 meter distance, a D435 would have an error factor of 2.5 mm to 5 mm.
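
As a rough worked example, Intel's camera tuning guide expresses depth RMS error as roughly depth² × subpixel / (focal length × baseline). Plugging in nominal D435 values (the exact numbers below are assumptions, not measurements from your unit) reproduces that few-millimeters-per-meter order of magnitude:

# Depth RMS error ~ depth^2 * subpixel / (focal_length_px * baseline)
# Nominal D435 values (assumed): 50 mm baseline, ~421 px focal length
# at 848x480, ~0.08 subpixel disparity accuracy
baseline_mm = 50.0
focal_px = 421.0
subpixel = 0.08

for z_mm in (350.0, 700.0, 1000.0):
    rms_mm = (z_mm ** 2) * subpixel / (focal_px * baseline_mm)
    print(f"{z_mm / 10:.0f} cm -> ~{rms_mm:.2f} mm RMS error")

# ~0.47 mm at 35 cm, ~1.86 mm at 70 cm, ~3.8 mm at 1 m,
# consistent with the 2.5 mm to 5 mm figure at 1 meter quoted above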

Also, unless the IR emitter is enabled to project a dot pattern onto the scene then a plain untextured or low-texture surface such as the table would be more difficult for the camera to analyze for depth detail.

The High Accuracy preset can help to filter out obviously incorrect depth values.

The 400 Series stereo depth cameras do not have a 'confidence map' like the earlier SR300 model did, but you can achieve a similar effect using the Second Peak Threshold control, which in the Viewer can be found in the options side-panel under Advanced Controls > Depth Control > DS Second Peak Threshold. If the slider is reduced towards zero from its default value of '325' then you can observe holes in the image disappearing, whilst holes increase if the slider is increased above the default.

Intel's Depth Map Improvements for Stereo-based Depth Cameras on Drones white paper provides a definition for the threshold, which is also known as Second Peak Delta.


When analyzing the disparities of an area in the stereo images for a match, there might be one clear candidate indicating a large peak in terms of correlation. In some cases, multiple candidates could be viable at different peak levels. The second peak threshold determines how big the difference from another peak needs to be, in order to have confidence in the current peak being the correct one.


https://dev.intelrealsense.com/docs/depth-map-improvements-for-stereo-based-depth-cameras-on-drones#section-b-depth-camera-settings
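
A rough Python sketch of adjusting this threshold through Advanced Mode; the field name deepSeaSecondPeakThreshold reflects my understanding of the SDK's depth control group, and the device must already be in advanced mode (an enabling loop appears later in this thread):

import pyrealsense2 as rs

dev = rs.context().query_devices()[0]
advnc_mode = rs.rs400_advanced_mode(dev)

# The DS Second Peak Threshold lives in the depth control group
depth_control = advnc_mode.get_depth_control()
print("Current second peak threshold:", depth_control.deepSeaSecondPeakThreshold)
depth_control.deepSeaSecondPeakThreshold = 150  # hypothetical value below the 325 default
advnc_mode.set_depth_control(depth_control)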

GHCK57 commented 2 years ago

Thanks for the advice. The distance from the camera to the observed point on the table should be approximately 72 cm / 0.72 m, but the camera tells us it is 0.76 m, not 0.73 m. We can ignore the photo I posted before; it was a different point on the table. I think 0.76 versus 0.72 is a huge difference. Does the camera need to be calibrated?

image

MartyG-RealSense commented 2 years ago

There is certainly no harm in calibrating the camera if you have doubts about the accuracy of the depth measurement values. It is worth bearing in mind that if On-Chip calibration is used then this can improve the quality of the image but a separate calibration option called Tare Calibration is used to improve depth accuracy.

image

GHCK57 commented 2 years ago

I thought On-Chip Calibration calibrated the depth and color. That is good news for me. It is good now.

image

MartyG-RealSense commented 2 years ago

That's great news, @GHCK57 :)

GHCK57 commented 2 years ago

Hi again @MartyG-RealSense

I think (I hope) my problems are over, but I want to ask you a few questions.

How can I attach RGB color to the XYZ point cloud data?

self.vertices = np.asarray(points.get_vertices(2)).reshape(-1, 3)  # XYZ
self.texture = np.asarray(points.get_texture_coordinates(2)).reshape(-1, 2)  # UV

I used to get the color from the color frame when the color and depth frames were aligned, but now there is no alignment, so I cannot use the color frame to get the RGB colors. What should I do to find the true color corresponding to each point?

MartyG-RealSense commented 2 years ago

At https://github.com/IntelRealSense/librealsense/issues/1890 there is a Python case where someone was trying to map a depth pixel to a color pixel instead of mapping a color pixel to depth pixel using rs2_project_color_pixel_to_depth_pixel
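
As an alternative sketch: without alignment, the pointcloud's own texture coordinates can be mapped onto the color image. This mirrors the UV array you already compute; the rounding and clipping details here are illustrative:

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start(rs.config())
frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()

pc = rs.pointcloud()
pc.map_to(color_frame)  # tell the pointcloud which texture to use
points = pc.calculate(depth_frame)

verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
tex = np.asanyarray(points.get_texture_coordinates()).view(np.float32).reshape(-1, 2)

# Convert normalized UV (0..1) into color image pixel indices and gather colors
color_image = np.asanyarray(color_frame.get_data())
h, w, _ = color_image.shape
u = np.clip((tex[:, 0] * w + 0.5).astype(int), 0, w - 1)
v = np.clip((tex[:, 1] * h + 0.5).astype(int), 0, h - 1)
rgb = color_image[v, u]  # per-vertex color, same order as verts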

GHCK57 commented 2 years ago

Hi again @MartyG-RealSense

I have not solved the problem posted in https://github.com/IntelRealSense/librealsense/issues/10133#issuecomment-1008899208

At the points where the table surface meets the box, stray points appear at the lower right and left corners of the box. These points are not related to the box; I think they belong to the surface of the table. Which configuration should be changed to remove these points?

image

MartyG-RealSense commented 2 years ago

This problem would seem to have similarities to the case https://github.com/IntelRealSense/librealsense/issues/8306 which discusses options for tidying pointcloud data that you have already tried, such as changing the depth unit scale and the Second Peak Threshold.

The RealSense user in that case also achieved some improvement by using an Advanced Mode control in the RealSense Viewer called Rsm to reduce noise such as the grey areas.

image

The setting that they changed was Remove Threshold, altering it from the default of '63' to a higher value of '94'.

It is worth noting though that in another case at https://github.com/IntelRealSense/librealsense/issues/5477 the image was improved by reducing the Rsm threshold value to '0'. So I would suggest both increasing and decreasing to see which direction above or below the default provides the best improvement for your image.

In your Python project, the value of the Rsm Remove Threshold setting could be defined in a json using the param-rsmremovethreshold json parameter.

image
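
For illustration, a minimal sketch of overriding just that parameter from Python by serializing the current settings, editing the key, and loading them back. The device must already be in advanced mode, and the serialized json appears to use a 0-1 scale for this parameter rather than the Viewer's 0-100 slider, so check the serialized value first:

import json
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start(rs.config())
advnc_mode = rs.rs400_advanced_mode(profile.get_device())

# Serialize the current settings, override one parameter, and load them back
settings = json.loads(advnc_mode.serialize_json())
print("Current value:", settings["param-rsmremovethreshold"])
settings["param-rsmremovethreshold"] = 0.94  # hypothetical value
advnc_mode.load_json(json.dumps(settings))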

If your preference is to set the value with Python code instead of a json, the SDK example python-rs400-advanced-mode-example.py demonstrates setting an Advanced Mode value.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/python-rs400-advanced-mode-example.py#L68

The SDK has an instruction called set_rau_thresholds_control()

https://intelrealsense.github.io/librealsense/python_docs/_generated/pyrealsense2.rs400_advanced_mode.html#pyrealsense2.rs400_advanced_mode.set_rau_thresholds_control

A RealSense user at the link below shared a Python script that makes use of this instruction.

https://community.intel.com/t5/Items-with-no-label/D415-in-Advanced-Mode/m-p/543308

image

import pyrealsense2 as rs
import time

DS5_product_ids = ["0AD1", "0AD2", "0AD3", "0AD4", "0AD5", "0AF6", "0AFE", "0AFF", "0B00", "0B01", "0B03", "0B07"]

def find_device_that_supports_advanced_mode():
    ctx = rs.context()
    ds5_dev = rs.device()
    devices = ctx.query_devices()
    for dev in devices:
        if dev.supports(rs.camera_info.product_id) and str(dev.get_info(rs.camera_info.product_id)) in DS5_product_ids:
            if dev.supports(rs.camera_info.name):
                print("Found device that supports advanced mode:", dev.get_info(rs.camera_info.name))
            return dev
    raise Exception("No device that supports advanced mode was found")

try:
    dev = find_device_that_supports_advanced_mode()
    advnc_mode = rs.rs400_advanced_mode(dev)
    print("Advanced mode is", "enabled" if advnc_mode.is_enabled() else "disabled")

    # Loop until we successfully enable advanced mode
    while not advnc_mode.is_enabled():
        print("Trying to enable advanced mode...")
        advnc_mode.toggle_advanced_mode(True)
        # At this point the device will disconnect and re-connect.
        print("Sleeping for 5 seconds...")
        time.sleep(5)
        # The 'dev' object will become invalid and we need to initialize it again
        dev = find_device_that_supports_advanced_mode()
        advnc_mode = rs.rs400_advanced_mode(dev)
        print("Advanced mode is", "enabled" if advnc_mode.is_enabled() else "disabled")

    ctrls = advnc_mode.get_rau_thresholds_control()
    print("RAU Thresholds Control: \n", ctrls)
    ctrls.rauDiffThresholdRed = 200
    ctrls.rauDiffThresholdGreen = 500
    ctrls.rauDiffThresholdBlue = 1000
    advnc_mode.set_rau_thresholds_control(ctrls)
    print("After Setting new value, RAU Thresholds Control: \n", advnc_mode.get_rau_thresholds_control())
except Exception as e:
    print(e)
    pass
GHCK57 commented 2 years ago

Hi @MartyG-RealSense

Sorry for my late reply. I enabled advanced mode in my Python project with the code below,

where config_D435 is the configuration that enables the streams and pipe_D435 is the pipeline (rs.pipeline()):

pipeline_profile = pipe_D435.start(config_D435)
device = pipeline_profile.get_device()
advanced_mode = rs.rs400_advanced_mode(device)
# load_json expects the json text itself, so read the preset file first
with open("HighResHighAccuracyPreset.json") as f:
    advanced_mode.load_json(f.read())

Also I use "HighResHighAccuracyPreset.json"

{ "param-disableraucolor": 0, "param-disablesadcolor": 0, "param-disablesadnormalize": 0, "param-disablesloleftcolor": 0, "param-disableslorightcolor": 1, "param-lambdaad": 751, "param-lambdacensus": 6, "param-leftrightthreshold": 10, "param-maxscorethreshb": 2893, "param-medianthreshold": 796, "param-minscorethresha": 4, "param-neighborthresh": 108, "param-raumine": 6, "param-rauminn": 3, "param-rauminnssum": 7, "param-raumins": 2, "param-rauminw": 2, "param-rauminwesum": 12, "param-regioncolorthresholdb": 0.786380709066072, "param-regioncolorthresholdg": 0.5664810046339115, "param-regioncolorthresholdr": 0.9857413557742051, "param-regionshrinku": 3, "param-regionshrinkv": 0, "param-regionspatialthresholdu": 7, "param-regionspatialthresholdv": 3, "param-robbinsmonrodecrement": 25, "param-robbinsmonroincrement": 2, "param-rsmdiffthreshold": 1.6605679586483368, "param-rsmrauslodiffthreshold": 0.7269914923801174, "param-rsmremovethreshold": 0.8150280066589434, "param-scanlineedgetaub": 13, "param-scanlineedgetaug": 15, "param-scanlineedgetaur": 30, "param-scanlinep1": 155, "param-scanlinep1onediscon": 160, "param-scanlinep1twodiscon": 59, "param-scanlinep2": 190, "param-scanlinep2onediscon": 507, "param-scanlinep2twodiscon": 493, "param-secondpeakdelta": 647, "param-texturecountthresh": 0, "param-texturedifferencethresh": 1722, "param-usersm": 1 }

I implemented the code you posted, but the problem still persists. Advanced mode is enabled. The output is:

Found device that supports advanced mode: Intel RealSense D435
Advanced mode is enabled
RAU Thresholds Control:
rauDiffThresholdRed: 1007, rauDiffThresholdGreen: 578, rauDiffThresholdBlue: 802
After Setting new value, RAU Thresholds Control:
rauDiffThresholdRed: 200, rauDiffThresholdGreen: 500, rauDiffThresholdBlue: 1000

image

MartyG-RealSense commented 2 years ago

I ran extensive further tests with a D435i and a similar cardboard box. I was able to replicate the black areas shown in your image above. They seemed to be strongest when the camera was viewing the box from a horizontal-diagonal angle and receded when the camera was placed parallel to the box off to the side of it, looking straight ahead.

image

image

The D415 model produced a better image using this 'straight ahead and off to the side view', but even when the camera was turned at an angle towards the box it was better than D435's image.

image

GHCK57 commented 2 years ago

I don't suffer from black areas. My problem is different, but I don't know how to explain it.

The blue lines mark one surface, and the red circle marks another. The green line shows the bottom boundary of the box. The yellow lines are degraded depth data.

image

This is the color frame.

image

GHCK57 commented 2 years ago

The color frame

image

Bottom view of the box. (point cloud data)

image

Front view of the box. (point cloud data)

image

Different view

image

image

As you can see, there are many white points. These points belong to the surface of the table, not to the box. I hope I have explained it well.

Also, this is a different point cloud from a different scan. There are only a few stray points in this cloud, which could be ignored. Long story short, sometimes we see these noisy spots and sometimes we don't.

image

MartyG-RealSense commented 2 years ago

The kind of point cloud edge trimming required may be beyond the SDK's default point cloud capabilities and require the use of specialist point cloud libraries such as PCL or Open3D to trim off unwanted detail with filters. For example, an outlier removal filter (which can remove unconnected 'islands' of points that are not joined to the main image) could likely deal with the corner blocks in the image above. The RealSense SDK has compatibility wrappers for PCL and Open3D to enable their functions to be accessed from a RealSense project.
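
For example, a minimal Open3D sketch of statistical outlier removal on an exported cloud (the file names here are hypothetical):

import open3d as o3d

# Load the exported cloud and drop isolated 'island' points
pcd = o3d.io.read_point_cloud("box.ply")
filtered, kept_indices = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
o3d.io.write_point_cloud("box_filtered.ply", filtered)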

GHCK57 commented 2 years ago

I already apply different kinds of filters, such as outlier removal, KNN, etc. It is not about that. I think there is a problem, but I actually don't know what it is, because if you look at the picture in this comment https://github.com/IntelRealSense/librealsense/issues/10133#issuecomment-1014359703 there is a depth distortion. Could it be caused by that?

GHCK57 commented 2 years ago

I will share one more example. I put two screwdrivers beside the box.

image

If we look at the depth frame it looks like this.

image

image

I would expect a gap between the screwdrivers and the box. However, the camera measures depth in that range: it fills in the space between the screwdriver and the box and sees it at the same distance as the box. Something seems to have gone wrong.

MartyG-RealSense commented 2 years ago

Is the curved-wall backdrop on top of the table removable from the table in order to eliminate the wall's curvature as a possible source of confusion for the camera when depth sensing?

GHCK57 commented 2 years ago

With the curved wall removed, the depth frame looks like this ("Custom" preset):

image

image

And this is the depth frame with the High Accuracy preset:

image

I uploaded a screen recording.

video.zip

I checked why the black hole grows larger. It is caused by the lights: when I checked the infrared stream, I didn't see the infrared dot pattern in that area because of the light, even though the light is not strong.

The infrared stream while the lights are turned on:

image

The infrared stream while the lights are turned off:

image

The depth frame while the lights are turned off:

image