I was not certain from your question whether you are sensing at close range or long range, so I will talk about both cases.
At longer range, depth accuracy degrades noticeably beyond 3 to 4 meters from the camera. This deviation is measured as RMS error. It affects the D435 model more than the D415 model due to the D435's hardware design. The link below has a chart that shows RMS error over distance for the D415 and D435.
https://communities.intel.com/message/559606#559606
At very close range, the camera may also have less accuracy than the earlier SR300 camera model. This is because the 400 Series is configured to be able to 'see' up to around 65 meters (its default depth unit is 1 mm, and with 16-bit depth values the maximum representable distance is about 65.5 m). It is possible to adjust the depth units of the camera to more closely match the SR300.
The link below explains this:
https://communities.intel.com/message/551089#551089
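If you would like to experiment with that, here is a minimal sketch of changing the depth units from Python. It assumes a pyrealsense2 build where the depth sensor exposes rs.option.depth_units, and the 0.0001 value is just an illustration, not a recommended setting:

```python
import pyrealsense2 as rs

# Sketch: use finer depth units for better close-range precision.
# The 400 Series default depth unit is 0.001 m (1 mm); with 16-bit
# depth values that gives a maximum range of ~65.5 m. A smaller
# unit trades maximum range for finer depth quantisation.
pipeline = rs.pipeline()
profile = pipeline.start()

depth_sensor = profile.get_device().first_depth_sensor()
if depth_sensor.supports(rs.option.depth_units):
    # 0.1 mm units: maximum representable depth drops to ~6.5 m
    depth_sensor.set_option(rs.option.depth_units, 0.0001)
```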
As an alternative to changing the depth scale, you may get better accuracy at close range by reducing the resolution.
We need to detect objects 20 centimeters away from the camera. Does that mean the measurements would be better if I switched to the SR300?
You would need to reduce the minimum sensing distance of the camera (MinZ). This can be done using the Disparity Shift setting. You can reduce minimum distance (how close the camera can get to an object) at the cost of reducing the maximum distance (MaxZ) that the camera can see.
For example, according to Intel's camera tuning guide, a Disparity Shift of '0' gives a MinZ of 45 cm and an infinite MaxZ, whilst setting the shift to '50' gives a MinZ of 30 cm and a MaxZ of 110 cm.
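Those two data points are consistent with the tuning guide's approximate relationship MinZ ≈ (focal length × baseline) / (126 + shift) and MaxZ ≈ (focal length × baseline) / shift, where 126 is the stereo algorithm's disparity search range. As a rough sketch (the 55000 pixel·mm focal-baseline product below is my own back-calculation from the figures above, not an official constant):

```python
def min_max_z(disparity_shift, fb=55000.0, search_range=126):
    """Approximate MinZ / MaxZ in mm for a given disparity shift.

    fb is focal length (pixels) x baseline (mm); 55000 is a
    hypothetical value back-calculated from the figures quoted
    above, not an official Intel constant.
    """
    min_z = fb / (search_range + disparity_shift)
    max_z = float('inf') if disparity_shift == 0 else fb / disparity_shift
    return min_z, max_z

print(min_max_z(0))   # ~(436 mm, inf)  -> roughly MinZ 45 cm, infinite MaxZ
print(min_max_z(50))  # ~(312 mm, 1100) -> roughly MinZ 30 cm, MaxZ 110 cm
```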
I am using a D435 as well. With a distance of 20 cm, I found that I needed to apply '50.0' to the Disparity Shift value for optimal results.
How do I use Python to set up disparity shift?
```python
import pyrealsense2 as rs

# Configure depth and color streams
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1920, 1080, rs.format.bgr8, 30)

profile = pipeline.start(config)
align = rs.align(rs.stream.color)
```
I personally used C++ with RealSense, not Python, but here is what I found:
python-rs400-advanced-mode-example.py
The area you should be interested in:

```python
import pyrealsense2 as rs

# Advanced mode must already be enabled on the device
# (the example script above shows how to toggle it on).
device = rs.context().query_devices()[0]
advnc_mode = rs.rs400_advanced_mode(device)

depth_table_control_group = advnc_mode.get_depth_table()
depth_table_control_group.disparityShift = 128
advnc_mode.set_depth_table(depth_table_control_group)
```
[RealSense Customer Engineering Team Comment] @laolaolulu Another way to get a smaller MinZ is to reduce the depth resolution.
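For illustration, reducing the depth resolution only requires requesting a smaller depth stream when configuring the pipeline. A minimal sketch (480x270 is one of the D435's supported depth modes; the exact MinZ improvement depends on the camera model):

```python
import pyrealsense2 as rs

# Sketch: a lower depth resolution reduces MinZ, because MinZ
# depends on the focal length in pixels, which shrinks with the
# image width.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 480, 270, rs.format.z16, 30)
pipeline.start(config)
```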
Hello, I'm experimenting with a D435 camera. I have performed tests varying the angle of inclination of the camera with respect to the ground. When the camera is placed perpendicular to the floor, the error corresponds to what is expected, but when I tilt the camera the error is much higher. Best regards, Dibet
How close to the floor is the camera placed, please?
RealSense 400 Series cameras can be pointed downwards as well as straight ahead (90 degrees) without problems, as long as objects are not below the camera's minimum depth sensing distance (MinZ).
So if the camera is located near the floor and is then angled downwards, I can imagine a situation where the floor might be below the camera's MinZ. When an object is below MinZ, the image progressively breaks up as the camera gets closer to the object.
Hi, the camera is located 2.5 m above the floor and the inclination angle is approximately 40 degrees. I am looking for people walking in the camera's FOV. I measured the distance from the camera to the ground (or walls) at many points and the error is much bigger than expected.
It sounds as though the floor or people would not be going under MinZ when the camera is mounted that high, so MinZ is likely not the cause of problems with your measurements.
When the camera is facing straight forwards (perpendicular), I would imagine that if the D435 camera is mounted at 2.5 m above the floor, the main things that it would see are walls / doors that are in front of it, and the top of peoples' heads.
I can foresee that you might want the camera to point 40 degrees downwards so that people could not avoid detection by crouching down as they walked (if they were intruders).
Could you give an indication, please, of the measurement values that you expect and the actual measurements that you are getting when the camera is angled down?
Edit: I can imagine that people who are further away from the camera might have greater error in their measurements than people who are closer. Because RMS error increases as the distance of an observed object from the camera increases (and the D435 model's RMS error increases over distance at about twice the rate of the D415 model), a person who approaches the camera from far away would initially have a higher measurement error, which decreases as they get closer to the camera.
So if the camera tracks a person all the way within its 10 m sensing range, I would expect their measurement to have a noticeably high error factor at 10 m away, with the error decreasing towards zero as they approach the camera's mounting position.
In the image we can see the depth measurements at some points. The real and camera-measured distances (in mm) are:

| Point | Real | Camera |
| --- | --- | --- |
| P | 2945 | 2740 |
| Q | 3301 | 2900 |
| R | 5167 | 4817 |
| Green | 4430 | 4567 |
| Blue (middle of the bottom) | 2874 | 2623 |

The inclination angle is 26 degrees and the camera height is 2085 mm for this test. The camera is using the default parameters (best results).
Thank you very much for your results. I considered them carefully. I can see that points P, Q and Blue, which are parallel to each other, have roughly the same error (about 200 mm difference). The 'R' and 'Green' objects at the back of the room, meanwhile, are parallel but have noticeably different errors (about 120 mm for 'Green' and about 350 mm for 'R'). This could be because further-away objects are harder for the camera to read well, particularly if they are small.
If the purpose of the project is to detect the presence of people and trigger a response, though, the variations in depth measurement may not be so important, since the goal is to detect something rather than measure it.
One must also take into consideration the 'wavy' effect from the D435. At distances beyond 3 or 4 m, the 'waves' are more significant. If you monitor the depth value of the same pixel over a period of time, you will see a sinusoidal pattern. The average over time would represent a better measure.
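As a minimal sketch of that time-averaging idea (the pixel coordinates and window size below are arbitrary placeholders, not recommended values):

```python
import pyrealsense2 as rs

# Sketch: average one pixel's depth over many frames to smooth out
# the periodic 'wave' noise in D435 measurements at longer range.
pipeline = rs.pipeline()
pipeline.start()

x, y = 424, 240    # arbitrary example pixel (centre of an 848x480 frame)
n_frames = 60      # arbitrary averaging window (~2 s at 30 fps)
total, valid = 0.0, 0

for _ in range(n_frames):
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    d = depth.get_distance(x, y)   # metres; 0 means no depth data
    if d > 0:
        total += d
        valid += 1

if valid:
    print('averaged depth: %.3f m over %d frames' % (total / valid, valid))
```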
While running a SLAM algorithm with an RGB-D sensor, I get some depth error. For example, I get an average error of 10 cm when I move 8 meters. I think this error is due to the RMS error of the RealSense D435i camera that I am using. I want to know how this error changes with distance, i.e. does the error increase more at short distance or at long distance, and by how much at each distance? Can we read the RMS error coming from the camera in real time? We want to read this error value and give it as a parameter to the optimization.
Hi @thomCastillo RMS error is related to the distance of an observed object from the camera, not how far the camera has travelled. The error grows with the square of an observed object's distance from the camera. For example, an object that is 1 meter from the camera may have an error factor of 2.5 mm to 5 mm in its measured distance from the camera.
Point 5 of the relevant section of Intel's camera tuning paper provides more information about RMS error.
The RealSense SDK's Depth Quality Tool demonstrates live feedback about RMS error.
https://github.com/IntelRealSense/librealsense/tree/master/tools/depth-quality
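For reference, the tuning guide's theoretical depth RMS error grows with the square of distance: RMS(Z) ≈ Z² × subpixel / (focal length × baseline). A small sketch of that formula, using illustrative D435-like values rather than readings from a real device:

```python
def depth_rms_error_mm(distance_mm, focal_px=447.0, baseline_mm=50.0,
                       subpixel=0.08):
    """Theoretical depth RMS error from Intel's tuning guide:
    RMS(Z) = Z^2 * subpixel / (focal * baseline).

    focal_px, baseline_mm and subpixel are illustrative D435-like
    values, not calibrated readings from a real device.
    """
    return (distance_mm ** 2) * subpixel / (focal_px * baseline_mm)

for z in (1000, 2000, 4000, 8000):
    print('%d mm -> ~%.1f mm RMS error' % (z, depth_rms_error_mm(z)))
# ~3.6 mm at 1 m, ~14 mm at 2 m, ~57 mm at 4 m, ~229 mm at 8 m
```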
Hi @MartyG-RealSense Thanks for the reply. I get an average error of 10 cm when I move 8 meters. Could this error be caused by the RealSense D435i camera? If it is caused by the camera, what kind of error is this?
I do not think this is related to RMS error if the camera is travelling 8 meters. It could be attributed to RMS error if the camera were stationary and the object whose distance is being measured had moved to 8 meters away from the camera.
If you are performing SLAM then it sounds like the kind of measurement errors experienced during mobile robot navigation in the RealSense T265 Tracking Camera case in the link below.
Performing relocalization with whatever SLAM tool you are using (e.g. RTAB-Map) can correct drift in positional measurements that has developed during travel.
Could we read the error coming from the RealSense D435i camera in real time and feed this error to the optimization?
What is the accuracy of the RealSense D435? I am using a D435 to measure depth and I get a 2 mm error. Is it necessary to calibrate, or is this normal?