Closed: andreaceruti closed this issue 2 years ago
Hi @andreaceruti Assuming that the bags are recorded under the same lighting conditions, the bag with the problem gives me the impression that the infrared sensors may have been exposed to a lighting event (such as a flash of sunlight directly into the camera's view) that over-saturated the sensor. An example of the kind of effect on the depth image that can be caused by such an event is shown at https://github.com/IntelRealSense/librealsense/issues/8880
In the particular case linked to above though, the RealSense user found that the problem never occurred if using the official USB cables supplied with the camera, but could occur randomly if using their own choice of USB cable. Purchasing a high quality brand of USB cable resolved their problem permanently.
Hi @andreaceruti Do you require further assistance with this case, please? Thanks!
Hi @MartyG-RealSense, sorry for the inactivity. I can close the issue now. Lastly, I would like to ask if you know any trick (apart from using neural networks), or any known method, to overcome this problem. The depth image contains a lot of zero-valued pixels, so methods such as histogram matching are ineffective, since the pixel value distribution drives the algorithm.
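One possible workaround for the zero-pixel problem described above is to restrict the histogram matching to the valid (non-zero) depth pixels, leaving the holes untouched. The sketch below is a minimal, illustrative quantile-mapping implementation in NumPy; the function name and the hole-handling policy are my own assumptions, not a library API:

```python
import numpy as np

def match_histograms_nonzero(src, ref):
    """Match the histogram of src to ref, ignoring zero (invalid) depth pixels.

    src, ref: 2-D uint16 depth images. Zeros are treated as holes and left as-is.
    Illustrative sketch only, not a library function.
    """
    out = src.copy()
    src_valid = src[src > 0].ravel()
    ref_valid = ref[ref > 0].ravel()
    if src_valid.size == 0 or ref_valid.size == 0:
        return out
    # Quantile mapping: send each valid source value to the reference value
    # sitting at the same empirical quantile.
    src_sorted = np.sort(src_valid)
    quantiles = np.searchsorted(src_sorted, src_valid, side="right") / src_valid.size
    mapped = np.quantile(ref_valid, quantiles)
    out[src > 0] = mapped.astype(src.dtype)
    return out
```

Whether this is meaningful depends on the scenes actually having comparable depth distributions across bags; it redistributes values but cannot recover depth that was never measured.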
Did you record the bags in ROS using rosbag record please? I ask this because bags recorded in the RealSense SDK (librealsense) and SDK tools such as the RealSense Viewer do not save aligned data. Instead, depth and color are saved to the bag as non-aligned separate streams. When the bag file is read, a librealsense script can then perform depth-color alignment on the two streams in real-time to generate an aligned image.
Looking at the images at the top of this case that you expect to receive, they seem to be depth-only streams with no RGB data. The color shading in the images is not RGB. Instead, it is depth coordinates that have been given a particular color depending on the depth value (the distance of a particular coordinate from the camera).
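To illustrate the point above, a depth-only frame can be rendered in color purely as a function of each pixel's depth value. Below is a minimal NumPy sketch; the blue-to-red ramp is an arbitrary choice for illustration (the RealSense Viewer uses its own, configurable colormaps), and zero pixels, meaning no depth data, are rendered black:

```python
import numpy as np

def colorize_depth(depth, d_min=None, d_max=None):
    """Map a 16-bit depth image to an RGB image for visualization.

    Zero pixels (no depth data) are rendered black. Simple blue (near)
    to red (far) ramp, chosen for illustration only.
    """
    valid = depth > 0
    if d_min is None:
        d_min = depth[valid].min() if valid.any() else 0
    if d_max is None:
        d_max = depth[valid].max() if valid.any() else 1
    # Normalize valid depth to [0, 1]
    t = np.zeros(depth.shape, dtype=np.float32)
    t[valid] = (depth[valid].astype(np.float32) - d_min) / max(d_max - d_min, 1)
    rgb = np.zeros(depth.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = (t * 255).astype(np.uint8)          # red grows with distance
    rgb[..., 2] = ((1.0 - t) * 255).astype(np.uint8)  # blue for near pixels
    rgb[~valid] = 0  # holes stay black
    return rgb
```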
If you are aiming to replicate those images when reading the bag then you should only need to read a pure depth topic if it has been saved in the bag, such as /camera/depth/image_rect_raw
@MartyG-RealSense Yes, I used rosbag record. I think I have saved the aligned depth since the ros topics I have in my bags are:
/d435i/aligned_depth_to_color/camera_info 7040 msgs : sensor_msgs/CameraInfo
/d435i/aligned_depth_to_color/image_raw 7040 msgs : sensor_msgs/Image
/d435i/color/camera_info 7299 msgs : sensor_msgs/CameraInfo
/d435i/color/image_raw 7299 msgs : sensor_msgs/Image
In this issue I have reported only the depth frames; the RGB ones are correct.
The corrupted depth frames came from /d435i/aligned_depth_to_color/image_raw, and I was wondering whether, since I have correct depth frames from other bags, I could repair the corrupted frames with some algorithm.
I looked over the case carefully again. Thanks very much for your patience.
If you are using the RealSense ROS wrapper to publish the camera topics, a way to help avoid the loss of depth information may be to load a json camera configuration file as part of the ROS launch and set a parameter called param-secondpeakdelta in the json file to '0' instead of its default of '325'.
This will make it less likely that a depth coordinate will be excluded from the image if there is doubt about whether it is an inaccurate depth value for that particular coordinate.
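As a sketch of the approach above, the relevant entry in the json file would look like the fragment below (assumption: a real preset file exported from the RealSense Viewer contains many more parameters; only the override discussed here is shown):

```json
{
  "param-secondpeakdelta": "0"
}
```

With the ROS1 realsense2_camera wrapper, such a file can be loaded at launch via the `json_file_path` argument, e.g. `roslaunch realsense2_camera rs_camera.launch json_file_path:=/path/to/preset.json` (the path is a placeholder).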
Issue Description
I have collected data from a D435i camera on different days, storing each session in a different bag file. I then extracted the rgb (/color/image_raw) and depth (/aligned_depth_to_color/image_raw) topics from the bags, and I get unexpected behavior when trying to visualize the depth images. This is the depth I am expecting, where I can clearly see the depth difference for every pixel. (On the left is an enhanced version of the image, and on the right the same image with stronger colours.)
Then I have a bag in which the depth is saved in this way. I have checked the rostopics and I can't see any difference between the bags. If I analyze the image, I can see that in this bag most of the pixels seem to be near, whereas I know I should obtain a result very similar to the one before.
Can someone explain to me what the problem could be? I am fairly new to this type of data and I can't understand what I am missing, since the whole procedure for getting the data is the same.