IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

depth images are different #10430

Closed andreaceruti closed 2 years ago

andreaceruti commented 2 years ago

Required Info
Camera Model { R200 / F200 / SR300 / ZR300 / D400 }
Firmware Version (Open RealSense Viewer --> Click info)
Operating System & Version {Win (8.1/10) / Linux (Ubuntu 14/16/17) / MacOS
Kernel Version (Linux Only) (e.g. 4.14.13)
Platform PC/Raspberry Pi/ NVIDIA Jetson / etc..
SDK Version { legacy / 2.<?>.<?> }
Language {C/C#/labview/nodejs/opencv/pcl/python/unity }
Segment {Robot/Smartphone/VR/AR/others }

Issue Description

I have collected data from a D435i camera on different days, storing everything in separate bag files. I then extracted the rgb (/color/image_raw) and depth (/aligned_depth_to_color/image_raw) topics from the bags, and I get unexpected behaviour when trying to visualize the depth images. This is the depth I am expecting, where I can clearly see the depth difference for every pixel. (On the left is an enhanced version of the image, and on the right the same image with stronger colours.)

Then I have a bag in which the depth is saved in this way. I have checked the rostopics and I can't see any difference from the other bags. If I analyze the image, most of the pixels seem to be near the camera, whereas I know I should obtain a result very similar to the one before.

Can someone explain what the problem could be? I am pretty new to this type of data and I can't understand what I am missing, since the whole procedure for getting the data is the same.
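A quick way to compare a "good" frame against a suspect one is to summarize the raw 16-bit depth values directly, before any colorization. The helper below is a minimal sketch using only numpy; the function name and the assumption that depth is stored as 16UC1 millimetre values (0 meaning "no data", the D400 series default) are mine, not from the thread.

```python
import numpy as np

def depth_stats(depth):
    """Summarize a 16-bit depth frame (values assumed in millimetres,
    0 = no data). A frame where most pixels report very small distances,
    as described above, shows up immediately in min/median."""
    depth = np.asarray(depth)
    valid = depth[depth > 0]
    return {
        "zero_fraction": float((depth == 0).mean()),
        "min_mm": int(valid.min()) if valid.size else 0,
        "max_mm": int(valid.max()) if valid.size else 0,
        "median_mm": float(np.median(valid)) if valid.size else 0.0,
    }
```

Running this on one frame from each bag makes it easy to see whether the second bag's values really are compressed toward the near range or whether only the visualization differs.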

MartyG-RealSense commented 2 years ago

Hi @andreaceruti Assuming that the bags are recorded under the same lighting conditions, the bag with the problem gives me the impression that the infrared sensors may have been exposed to a lighting event (such as a flash of sunlight directly into the camera's view) that over-saturated the sensor. An example of the kind of effect on the depth image that can be caused by such an event is shown at https://github.com/IntelRealSense/librealsense/issues/8880

image

In the particular case linked to above though, the RealSense user found that the problem never occurred if using the official USB cables supplied with the camera, but could occur randomly if using their own choice of USB cable. Purchasing a high quality brand of USB cable resolved their problem permanently.

MartyG-RealSense commented 2 years ago

Hi @andreaceruti Do you require further assistance with this case, please? Thanks!

andreaceruti commented 2 years ago

Hi @MartyG-RealSense, sorry for the inactivity. I can close the issue now. Lastly, I would like to ask if you know of some trick (apart from using neural networks), or if there is a known method, to overcome this problem. In the depth image there are a lot of zero pixel values, so methods such as histogram matching are ineffective, since the pixel value distribution impacts the algorithm.
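One workaround for the zero-pixel problem mentioned above is to run histogram matching only over the valid (nonzero) pixels, so the no-data values neither distort the distributions nor get remapped. This is a sketch of that idea in plain numpy, assuming 16-bit depth frames with 0 meaning "no data"; the function name is illustrative.

```python
import numpy as np

def match_valid_histogram(src, ref):
    """Histogram-match the nonzero pixels of `src` to the nonzero pixels
    of `ref`, leaving zero (no-data) pixels untouched."""
    src = np.asarray(src)
    ref = np.asarray(ref)
    out = src.copy()
    src_valid = src[src > 0]
    ref_valid = ref[ref > 0]
    if src_valid.size == 0 or ref_valid.size == 0:
        return out
    # Empirical CDF position of each valid source pixel...
    src_sorted = np.sort(src_valid)
    quantiles = np.searchsorted(src_sorted, src_valid, side="right") / src_valid.size
    # ...mapped onto the same quantile of the valid reference pixels.
    out[src > 0] = np.quantile(ref_valid, quantiles).astype(src.dtype)
    return out
```

This corrects the overall value distribution, but it cannot recover per-pixel geometry that was lost at capture time, so it is at best a cosmetic repair for the corrupted frames.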

MartyG-RealSense commented 2 years ago

Did you record the bags in ROS using rosbag record please? I ask this because bags recorded in the RealSense SDK (librealsense) and SDK tools such as the RealSense Viewer do not save aligned data. Instead, depth and color are saved to the bag as non-aligned separate streams. When the bag file is read, a librealsense script can then perform depth-color alignment on the two streams in real-time to generate an aligned image.
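For bags recorded by the SDK itself, the read-time alignment described above is typically done with `rs.align`. A minimal playback sketch, assuming the `pyrealsense2` package is installed and `"recording.bag"` is a placeholder file name (this requires an actual SDK-recorded bag, so it is not runnable stand-alone):

```python
import pyrealsense2 as rs

# Play back an SDK-recorded bag and align depth onto the color stream.
pipeline = rs.pipeline()
config = rs.config()
rs.config.enable_device_from_file(config, "recording.bag")  # placeholder path
pipeline.start(config)

align = rs.align(rs.stream.color)  # alignment target: the color stream
try:
    frames = pipeline.wait_for_frames()
    aligned = align.process(frames)          # per-frameset alignment
    depth_frame = aligned.get_depth_frame()  # now pixel-registered to color
    color_frame = aligned.get_color_frame()
finally:
    pipeline.stop()
```

Note that this path applies only to SDK-recorded bags; bags produced with `rosbag record` from the ROS wrapper can already contain the aligned topic, as in this case.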

Looking at the images at the top of this case that you expect to receive, they seem to be depth-only streams with no RGB data. The color shading in the images is not RGB. Instead, it is depth coordinates that have been given a particular color depending on the depth value (the distance of a particular coordinate from the camera).

If you are aiming to replicate those images when reading the bag then you should only need to read a pure depth topic if it has been saved in the bag, such as /camera/depth/image_rect_raw
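When reading such a topic back out of the bag, the `sensor_msgs/Image` payload for D400 depth is a raw 16UC1 buffer that can be decoded with numpy alone (useful if `cv_bridge` is not available). A minimal sketch; the function name is mine, and the fields (`data`, `height`, `width`, `is_bigendian`) come from the standard `sensor_msgs/Image` message:

```python
import numpy as np

def decode_depth_msg(data, height, width, is_bigendian=False):
    """Convert the raw byte buffer of a sensor_msgs/Image with encoding
    '16UC1' into an (H, W) uint16 numpy array of depth values."""
    dtype = np.dtype(np.uint16).newbyteorder(">" if is_bigendian else "<")
    return np.frombuffer(data, dtype=dtype).reshape(height, width)
```

The resulting array can then be colorized (e.g. normalized and passed through a colormap) to reproduce the depth-shaded images shown at the top of the thread.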

andreaceruti commented 2 years ago

@MartyG-RealSense Yes, I used rosbag record. I think I have saved the aligned depth since the ros topics I have in my bags are:

/d435i/aligned_depth_to_color/camera_info 7040 msgs : sensor_msgs/CameraInfo
/d435i/aligned_depth_to_color/image_raw 7040 msgs : sensor_msgs/Image
/d435i/color/camera_info 7299 msgs : sensor_msgs/CameraInfo
/d435i/color/image_raw 7299 msgs : sensor_msgs/Image

In this issue I have reported only the depth frames; the RGB ones are correct.

The corrupted depth frames came from /d435i/aligned_depth_to_color/image_raw, and I was wondering whether, since I have correct depth frames coming from other bags, I can fix the corrupted frames with some algorithm.

MartyG-RealSense commented 2 years ago

I looked over the case carefully again. Thanks very much for your patience.

If you are using the RealSense ROS wrapper to publish the camera topics, a way to help avoid the loss of depth information may be to load a json camera configuration file as part of the ROS launch and set a parameter in the json file called param-secondpeakdelta to '0' instead of the default of '325'.

This will make it less likely that a depth coordinate will be excluded from the image if there is doubt about whether it is an inaccurate depth value for that particular coordinate.
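A full advanced-mode preset contains many parameters and is normally exported from the RealSense Viewer and then edited; the fragment below shows only the single value being changed (the surrounding keys are omitted here, not optional in a real preset file):

```json
{
    "param-secondpeakdelta": "0"
}
```

Assuming the standard ROS1 wrapper launch files, the edited preset can then be passed at launch, e.g. `roslaunch realsense2_camera rs_camera.launch json_file_path:=/path/to/preset.json` (the `json_file_path` argument name is from the wrapper's launch files; verify it against your wrapper version).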