yijionglin opened 5 days ago
A side question:
I have seen a few people post snippets of their code (example 1, example 2) in the issues which use:
```python
depth_point_ = rs.rs2_project_color_pixel_to_depth_pixel(
    depth_frame.get_data(), depth_scale,
    depth_min, depth_max,
    depth_intrin, color_intrin,
    depth_to_color_extrin, color_to_depth_extrin,
    color_point)
```
However, according to the API documentation, the order of `depth_to_color_extrin` and `color_to_depth_extrin` is swapped relative to this.
I stick with the argument order indicated in the documentation, and it works better than the inverted one, but I am not sure whether this is the cause of my aforementioned problem.
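The reason the extrinsics order matters is that each extrinsics struct describes a one-way rigid transform, and applying them in the wrong slots applies the wrong map. A minimal numpy sketch of the idea, with made-up rotation and translation values (these are not real calibration data, just illustrative numbers):

```python
import numpy as np

# Illustrative (made-up) depth->color extrinsics: a small rotation about Z
# plus a ~15 mm stereo baseline. Real values come from camera calibration.
theta = 0.01
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.015, 0.0, 0.0])

p_depth = np.array([0.10, 0.05, 0.50])  # a 3D point in the depth camera frame

# depth->color maps one way; color->depth is its inverse, not the same map.
p_color = R @ p_depth + t
p_back = R.T @ (p_color - t)

print(np.allclose(p_back, p_depth))           # True: the inverse recovers the point
print(np.allclose(R @ p_color + t, p_depth))  # False: applying the forward map again shifts it
```

So passing the two extrinsics in swapped positions transforms every point in the wrong direction, which would show up as a systematic pixel shift rather than random noise.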
Here is another example figure of my problem, though it is less noticeable here because the segmented area is larger.
Hi there,
If possible, could I get any suggestions on this bit?
Best,
Hi @yijionglin The depth colorization on your images appears to be reversed. Usually depth is colored from blue (near range depth values) to red (far range depth values). In your images though, the near range is represented by red and the far range by blue. The link below has resources about setting the colorizer color scheme with Python code.
https://support.intelrealsense.com/hc/en-us/community/posts/1500000949421-D435i-color-scheme-problem
If you do not mind your depth image being colored this way though then you do not need to adjust it.
The depth image of the white machine does seem to be very distorted though and the black component with a vertical shaft has a black shadow behind it.
It is a general physics principle (not specific to RealSense) that dark grey or black absorbs light and so makes it more difficult for depth cameras to read depth information from such surfaces. The darker the color shade, the more light that is absorbed and so the less depth detail that the camera can obtain. Increasing the strength of the external light illumination cast onto the black area can help to bring out depth information from it.
If you have access to the RealSense Viewer tool then you could try resetting your camera to its default factory-new calibration using the instructions at https://github.com/IntelRealSense/librealsense/issues/10182#issuecomment-1019854487 to see whether that makes a difference.
Hi @MartyG-RealSense ,
Thanks for your reply! I will try to reset the camera.
I have already increased the brightness of illumination but it doesn't help.
When you said there is a black shadow behind the bolt-shaped object, do you mean the shadow on its left or on its right? The pixel-mapping problem always happens on the right side, where there is not much shadow.
Do you think there could be a problem in my code?
Many thanks,
What confused me most is that the majority of the pixels mapped correctly; only the area on the right side is shifted...
You could test whether there is a problem in your code by running the RealSense SDK's rs-align depth-color alignment example program. If you installed the SDK with the examples and tools included, a pre-built executable version of rs-align should be in /usr/local/bin on Ubuntu.
Regarding the black shadow, I mean the one behind the orange shaft.
Hi @MartyG-RealSense , thanks for the suggestions!
I have just tried the RealSense SDK's rs-align depth-color alignment example program; here is the screenshot. How can we tell whether it is well aligned? Here are the two images with color and depth references (toggled by clicking the box down below).
Here is the screenshot I took when running the program named 'rs-align-advance'.
I found something interesting when I ran the official example program named 'rs-pointcloud'. It is very clear that some of the area to the right of the orange shaft is mismatched to the background. In this case, can I assume there is no problem in my code, since the two behave exactly the same?
After the calibration reset process following the instructions you shared, I have also played with the 3D view option in realsense-viewer, where this phenomenon still exists. Here is a short video of it. (The illumination condition is fairly good, so I think the problem lies in the camera itself.) By the way, the orange shaft is only about 60 mm tall; I am not sure if its small size introduces this kind of problem.
I have also checked this document for the rs-pointcloud program, and I think the problem is in this line: `vertices = points.get_vertices()`. The depth information is not correct at the orange shaft's right edge.
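One thing worth keeping in mind when inspecting those vertices: pixels with no valid depth reading (such as the occlusion shadow along an object's edge) typically come back as all-zero vertices in the point cloud. A hedged numpy sketch with made-up vertex values, assuming the usual conversion of `points.get_vertices()` to a `(N, 3)` float array:

```python
import numpy as np

# Made-up vertex array in the shape typically produced by
# np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
verts = np.array([[0.10,  0.05, 0.50],
                  [0.00,  0.00, 0.00],   # no valid depth at this pixel
                  [0.02, -0.01, 0.48]], dtype=np.float32)

# Drop vertices with zero depth before measuring anything from the cloud
valid = verts[verts[:, 2] > 0]
print(valid.shape)  # (2, 3)
```

If the mismatched region at the shaft's right edge corresponds to such zero-depth pixels, the vertices there carry no real geometry to begin with.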
I also noticed the same issue in this youtube video.
Thanks very much for the testing and images. They do seem to indicate that the problem is not in your code.
Does the Viewer image improve if you change the 'Depth Units' setting at Stereo Module > Controls > Depth Units from its default of 0.001 to 0.0001 please?
If it does make a difference, you can change the depth units in your program script with the rs.option.depth_units instruction.
```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
depth_sensor = profile.get_device().first_depth_sensor()
# Change the depth scale from the default 1 mm to 0.1 mm per increment
if depth_sensor.supports(rs.option.depth_units):
    depth_sensor.set_option(rs.option.depth_units, 0.0001)
```
Hi @MartyG-RealSense ,
Thanks for your reply! I have tried the value you suggested, and it does not seem to make any difference to this issue. By the way, what does the depth unit mean? Is it similar to resolution?
It is a difficult setting to explain, and one that does not really need a full explanation in order to use it. It defines the number of meters represented by one increment of the raw depth value: 0.001 indicates millimeter scale, while 0.01 indicates centimeter scale.
The simplest way to think about it is that when observing an object at relatively close range, changing the depth scale to 0.0001 can sometimes reduce the size of the black shadow around an object.
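To make the trade-off concrete, the same raw 16-bit depth value maps to a different distance depending on the scale, and the 65535 ceiling maps to a different maximum range. A quick arithmetic sketch (the raw value is made up):

```python
# A raw 16-bit depth value as stored in a Z16 depth frame (made-up example)
raw = 1250

# distance in meters = raw value * depth_units
print(raw * 0.001)   # 1.25  -> default 1 mm steps, max range ~65.5 m
print(raw * 0.0001)  # 0.125 -> 0.1 mm steps, finer precision but max range ~6.55 m
```

So the smaller depth-unit value buys finer depth steps at close range at the cost of a much shorter maximum measurable distance.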
Issue Description
Hello there,
I have tried to map the segmentation area in the color image to the depth image. However, I found that some areas are shifted in the depth image.
It looks like a problem with the integer conversion of the depth pixels (the `depth_pixel` generated from `project_color_pixel_to_depth_pixel` is a list of two float values). Here is the snippet of my code:

And here are the original color/depth images:
And here are the segmented color/depth images (red are in the color image and white area in the depth image):
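Since the mapped `depth_pixel` arrives as two floats, one thing worth ruling out is how those floats become array indices: truncating with `int()` biases every coordinate toward the top-left by up to a pixel, while rounding does not. A minimal sketch (the pixel values and resolution are made up, not taken from my actual setup):

```python
# Hypothetical float pixel as returned by rs2_project_color_pixel_to_depth_pixel
depth_pixel = [311.7, 240.2]
w, h = 640, 480  # assumed depth stream resolution

# Round to the nearest integer rather than truncating, then clamp into bounds
u = min(max(int(round(depth_pixel[0])), 0), w - 1)
v = min(max(int(round(depth_pixel[1])), 0), h - 1)
print(u, v)  # 312 240
```

A sub-pixel bias like this would not explain a large shift on its own, but it is cheap to check.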
Could you please give me some suggestions on this?
Many thanks, Bourne