Closed SarojDebnath closed 1 year ago
Hi @SarojDebnath Coordinates may register a depth of zero if there is no depth information at those coordinates. Even if there is a solid object present at those coordinates in the real world, the camera may not have been able to detect depth at that particular surface area. Reasons for this could include the area having a reflective surface or being colored dark grey / black. In the case of dark grey / black surfaces that are not reflective, casting a strong light source onto the area can help to bring out depth information.
Areas without depth information may appear as black, giving the impression that there is data there when in fact it is simply empty space on the image. An example would be scanning a black cable, which could result in a cable-shaped area on the image of empty space without depth data within it.
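Missing depth shows up as zero values in the depth frame, so holes can be located directly. A minimal NumPy sketch, using a small synthetic array in place of a real frame (which you would normally get via `np.asanyarray(depth_frame.get_data())`):

```python
import numpy as np

# Synthetic 4x4 depth frame standing in for a real one; values are raw
# uint16 depth units, with 0 meaning "no depth measured at this pixel".
depth = np.array([[500, 510,   0, 505],
                  [498,   0,   0, 507],
                  [501, 503, 502,   0],
                  [499, 500, 504, 506]], dtype=np.uint16)

holes = (depth == 0)              # boolean mask of pixels with no depth data
num_holes = int(holes.sum())      # number of missing pixels in this frame
coverage = 1.0 - num_holes / depth.size
print(f"{num_holes} holes, {coverage:.0%} of pixels have depth")
```

Visualising the `holes` mask next to the RGB image makes it easy to see whether the empty regions line up with dark or reflective materials in the scene.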
Hi @MartyG-RealSense , I agree with the possible reasons that you have mentioned. However, I would like to know whether there is a way to determine how intense the lighting needs to be, or is it only by trial that we determine the conditions for the best result?
It would likely depend on the material type and properties of the particular surface that is being observed and so would require tests to find the optimum lighting conditions.
If it is a reflective surface then it is possible to dampen the glare from reflections to enable it to be more easily readable by the camera. This could be done by fitting a thin-film polarizing filter product over the lenses on the outside of the camera, applying a fine spray-on powder (such as baby powder or foot powder) to the surface, or using a professional 3D-modelling reflection-damping aerosol spray product (such as those used for photographing jewelry for a catalog).
Thank you @MartyG-RealSense for your super fast reply. It resolved some of my doubts. Can you please also mention a few techniques to achieve the best depth results? I have tried to fine-tune using the documentation available for RealSense, but the results are not up to the mark.
My conditions are: the camera always observes from the same position, and the object is similar each time with only small variations in its position.
If the lighting level at the camera's location is consistent all day (such as an indoor room with artificial lighting) then you may find it beneficial to disable auto-exposure and use a fixed manual exposure value that does not vary.
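A minimal pyrealsense2 sketch of that tip. The exposure and gain numbers below are hypothetical starting points rather than recommended values, and the hardware calls are guarded so the snippet can be read (and its intended settings inspected) without a camera attached:

```python
# Hypothetical fixed settings for a consistently lit room; tune by experiment.
MANUAL_EXPOSURE_US = 8500   # exposure time in microseconds (assumed value)
MANUAL_GAIN = 16            # sensor gain (assumed value)

def manual_depth_settings():
    """Option values we intend to apply to the depth sensor."""
    return {"enable_auto_exposure": 0,   # turn auto-exposure off
            "exposure": MANUAL_EXPOSURE_US,
            "gain": MANUAL_GAIN}

try:
    import pyrealsense2 as rs
    ctx = rs.context()
    if ctx.query_devices().size() > 0:
        sensor = ctx.query_devices()[0].first_depth_sensor()
        opts = manual_depth_settings()
        sensor.set_option(rs.option.enable_auto_exposure,
                          opts["enable_auto_exposure"])
        sensor.set_option(rs.option.exposure, opts["exposure"])
        sensor.set_option(rs.option.gain, opts["gain"])
except Exception:
    pass  # pyrealsense2 not installed or no camera available
```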
If an observed object is thin (such as a pen or toothpick) then changing the camera's depth scale from its default value of 0.001 to 0.0001 may help to fill in holes in the image, as described at https://github.com/IntelRealSense/librealsense/issues/8228
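The depth scale ("depth units") trades range for precision: each 16-bit depth value is multiplied by the depth-unit size to get metres. A small arithmetic sketch of that tradeoff (on a live camera the option itself can be changed with `sensor.set_option(rs.option.depth_units, 0.0001)` on the depth sensor):

```python
# A RealSense depth pixel is a 16-bit integer scaled by the depth-unit
# size (metres per unit).  Smaller units give finer quantization steps
# but a shorter maximum representable range.
MAX_RAW = 2**16 - 1   # largest raw value a depth pixel can hold

def raw_to_metres(raw, depth_units):
    return raw * depth_units

default_max = raw_to_metres(MAX_RAW, 0.001)   # ~65.5 m range, 1 mm steps
fine_max = raw_to_metres(MAX_RAW, 0.0001)     # ~6.55 m range, 0.1 mm steps
print(default_max, fine_max)
```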
If the D435i camera is further than 3 meters from the observed surface then depth error will increase with distance (3 meters and beyond is roughly the point on the D435i model where the error becomes noticeable). This is due to RMS error: for a stereo camera, depth error grows approximately with the square of the distance between the camera and the observed object.
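That relationship can be sketched numerically. The formula below follows the standard stereo error model (error proportional to distance squared); the baseline, focal length, and subpixel accuracy values are illustrative assumptions, not calibrated D435i numbers:

```python
# Standard stereo RMS depth-error model:
#   rms_error ~= distance^2 * subpixel / (focal_px * baseline)
BASELINE_M = 0.050   # assumed stereo baseline (D435-class cameras are ~50 mm)
FOCAL_PX = 640.0     # assumed focal length in pixels
SUBPIXEL = 0.08      # assumed subpixel disparity accuracy

def depth_rms_error(distance_m):
    return distance_m ** 2 * SUBPIXEL / (FOCAL_PX * BASELINE_M)

err_3m = depth_rms_error(3.0)   # expected error at 3 m
err_6m = depth_rms_error(6.0)   # doubling the distance quadruples the error
```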
There may also be depth error if the observed surface is closer to the D435i camera than its 0.1 meters / 10 cm minimum depth sensing distance. Increasing the camera's Disparity Shift value to '100' instead of the default '0' will reduce the camera's minimum distance and enable it to get closer to surfaces, at the expense of the maximum observable depth distance being reduced.
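The tradeoff follows from the basic stereo relation depth = focal × baseline / disparity: the ASIC searches a fixed disparity window, and shifting that window lowers both the minimum and the maximum reachable depth. A sketch of that effect, where the window size, focal length, and baseline are illustrative assumptions rather than calibrated D435i values:

```python
FOCAL_PX = 640.0        # assumed focal length in pixels
BASELINE_M = 0.050      # assumed stereo baseline in metres
DISPARITY_RANGE = 126   # assumed size of the ASIC's disparity search window

def depth_limits(disparity_shift):
    """Approximate (min_depth, max_depth) in metres for a given shift."""
    fb = FOCAL_PX * BASELINE_M
    min_depth = fb / (disparity_shift + DISPARITY_RANGE)
    # With a nonzero shift the smallest searchable disparity equals the
    # shift itself, so the far limit becomes finite:
    max_depth = float("inf") if disparity_shift == 0 else fb / disparity_shift
    return min_depth, max_depth

near0, far0 = depth_limits(0)        # default: longest range, larger minimum
near100, far100 = depth_limits(100)  # shift 100: closer minimum, capped range
```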
If the surface has similar-looking repeating patterns horizontally and vertically, such as floor / ceiling tiles, then this can confuse the camera's depth sensing. A guide to reducing this 'repetitive pattern' negative effect can be found at the link below.
https://dev.intelrealsense.com/docs/mitigate-repetitive-pattern-effect-stereo-depth-cameras
Thank you for the information.
Issue Description
Here, I have tried to use the GrabCut algorithm and then retrieve the 3-D coordinates using pixel and depth information. However, a few of the points in the image have a depth value of 0, which yields (0,0,0) as the world coordinates. This behaviour is strange because all of the depth data are integrated. I searched a lot through the closed and open issues of the repository but couldn't find any fruitful solution. How can I solve it?
OUTPUT:
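One way to handle those zero-depth points: a hole carries no measurement, so it should be skipped rather than deprojected to (0,0,0). A pure-Python sketch of the deprojection step, mirroring what `rs2_deproject_pixel_to_point` does for an undistorted image; the intrinsics (fx, fy, ppx, ppy) below are made-up illustrative values:

```python
FX, FY = 615.0, 615.0     # assumed focal lengths in pixels
PPX, PPY = 320.0, 240.0   # assumed principal point

def deproject(u, v, depth_m):
    """Return the 3-D point for pixel (u, v), or None when depth is missing."""
    if depth_m == 0:
        return None       # hole: no depth data here, not a point at the origin
    x = (u - PPX) / FX * depth_m
    y = (v - PPY) / FY * depth_m
    return (x, y, depth_m)

# A pixel at the principal point 1 m away maps straight down the optical axis;
# a zero-depth pixel is reported as missing instead of (0, 0, 0).
center = deproject(320, 240, 1.0)
hole = deproject(100, 80, 0.0)
```

Filtering out the `None` results before fitting or measuring the GrabCut-segmented object keeps the holes from dragging the computed geometry toward the camera origin.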