Closed BarzelS closed 4 years ago
- I would recommend choosing only a 400 Series depth camera. These contain a Vision Processor D4 circuit board that can make real-time auto exposure adjustments based on current lighting conditions.
The newest RealSense D455 model is the first in the series to have a fast global shutter on its RGB sensor, matching the global shutter used for depth sensing. Previous 400 Series models had a slower rolling shutter on the RGB sensor, which made color images prone to blur when capturing fast motion.
IntelRealSense/librealsense#6610
- If the IMU is a concern for you, it may be worth pairing a 400 Series camera with the OpenVSLAM Visual SLAM software. It does not depend on an IMU and is extremely customisable in regard to the camera hardware that it can work with.
https://github.com/xdspacelab/openvslam
- The IMU component in RealSense cameras does not have internal calibration, and so may not report zero acceleration values even when the camera is motionless at idle. On the D435i this can be addressed by performing a software calibration of the IMU.
https://dev.intelrealsense.com/docs/imu-calibration-tool-for-intel-realsense-depth-camera
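As a quick sanity check before running the calibration tool, you can average a window of accelerometer samples taken while the camera sits still and compare the mean magnitude against standard gravity. The sketch below shows only that bias-estimation step; the sample values are made up for illustration, and in practice the tuples would come from the camera's motion stream:

```python
import math

GRAVITY = 9.80665  # standard gravity, m/s^2

def estimate_accel_bias(samples):
    """Estimate accelerometer bias from samples taken while the camera
    is motionless. Each sample is an (x, y, z) tuple in m/s^2. Returns
    the deviation of the mean acceleration magnitude from standard
    gravity; a large value suggests the IMU would benefit from the
    software calibration described above."""
    n = len(samples)
    mean = tuple(sum(s[i] for s in samples) / n for i in range(3))
    magnitude = math.sqrt(sum(c * c for c in mean))
    return magnitude - GRAVITY

# Hypothetical stationary readings (not from a real device):
readings = [(0.05, -9.79, 0.12), (0.06, -9.81, 0.11), (0.04, -9.80, 0.13)]
bias = estimate_accel_bias(readings)
print(f"bias: {bias:.3f} m/s^2")  # a small residual here is expected
```

A deviation of more than a few tenths of a m/s^2 at rest is a reasonable hint that calibration is worth running.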
If you have the D435i's projector enabled, the semi-random dot pattern projected by it can aid the camera in analysing the depth information of surfaces with low texture or no texture, as the dots act as a texture source.
Have you tried Intel's tutorial for IMU-equipped SLAM on the D435i, please?
https://github.com/IntelRealSense/realsense-ros/wiki/SLAM-with-D435i
I note that you have tried VINS-FUSION. There is also VINS-RGBD that can be used with D435i.
https://github.com/STAR-Center/VINS-RGBD
A research paper has been published about its use with D435i.
Yes, I tried the opensource_tracking launch file and got large drift immediately at the beginning. I think something has changed in rtabmap or in realsense so that it doesn't work anymore. Have you tried it with the new realsense versions?
Yes, I've tried VINS-RGBD and got the same bad results.
As I said, it seems there is a problem with my IMU usage. Normal VO (no IMU) approaches seem to work fine until I get to low-textured scenes, but when I combine the IMU (VIO approaches) I get this drift. I suspect it is related to changes in both the realsense-ros and librealsense versions regarding the IMU.
I'm not aware of changes made to how the D435i's IMU is handled. I recall that you have been having issues with your IMU in regard to calibration in another case that you and I have been going through.
https://github.com/IntelRealSense/realsense-ros/issues/1286
In regard to your question in that case about how you could not get yaw: a RealSense team member says that pitch and roll are supported in the rs-motion example. In the link below, a RealSense user programmed their own Python mechanism for yaw.
https://github.com/IntelRealSense/librealsense/issues/4391#issuecomment-510369366
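For reference, the approach in that linked discussion boils down to integrating the gyroscope's angular velocity about the vertical axis over time to get a relative yaw angle. A minimal sketch of that integration step, with made-up sample values (the real rates would come from the camera's gyro stream, and which gyro axis corresponds to yaw depends on the camera's mounting):

```python
def integrate_yaw(gyro_rates, dt):
    """Integrate angular velocity samples (rad/s) about the yaw axis
    over fixed time steps dt (s) to get a relative yaw angle in
    radians. Pure gyro integration drifts over time, since there is
    no magnetometer to correct it - which is why yaw is not offered
    out of the box alongside pitch and roll in rs-motion."""
    yaw = 0.0
    for rate in gyro_rates:
        yaw += rate * dt
    return yaw

# Hypothetical: 100 samples at 200 Hz during a constant 0.5 rad/s turn
samples = [0.5] * 100
print(integrate_yaw(samples, 1.0 / 200.0))  # roughly 0.25 rad (0.5 s at 0.5 rad/s)
```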
Whilst it is extremely rare for RealSense hardware components to fail, it is not impossible. Going through both of your cases though (this one and #1286) I can't see anything that strongly suggests an IMU component failure. And the IMU is not likely to have a direct effect on whether a surface is recognised if it is low-textured or the illumination level is low.
You of course have the option of exploring pairing your depth camera with a T265. If you wish to discuss that with a RealSense support team member, please close this case and start a new one, as somebody else on the RealSense team handles T265 related cases.
It's not that the IMU is related to whether a surface is recognised if it is low-textured or the illumination level is low; it's the fact that it gives a continuous estimate of the movement in the short term in scenarios where the visual odometry gets lost.
I see. I understand what you mean now. However, this is getting into topics that I am not able to provide support about. So you are more likely to get the answers that you need if you close this case and start a new case about IMU navigation specifically so that a member of the RealSense team who can handle such cases can assist you.
Please make sure to include mention of the T265 in the title or the opening question if you need T265 information, so that Intel know that the case will not be handled by me. I apologise that I could not be of more help in this particular instance.
I think I found something that has changed across the realsense versions. I played back an old bag from the network that was recorded with the D435i (https://star-center.shanghaitech.edu.cn/seafile/d/0ea45d1878914077ade5/ - Handheld Normal.bag) and then ran "rostopic echo /camera/imu".
Why did you make this change? Is there any way I can publish the IMU in the camera_imu_frame and not in the new frame?
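If the frame name itself is the only obstacle, one possible workaround (a sketch, not something provided by the realsense-ros wrapper) is a small relay node that rewrites the frame_id of each incoming Imu message before republishing it under the old name. The core of that is just a callback like the one below; the Header/Imu classes here are minimal stand-ins for the real sensor_msgs types, and the frame names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Header:
    frame_id: str = ""

@dataclass
class Imu:
    """Minimal stand-in for sensor_msgs/Imu. In a real node this
    would be the ROS message type, and relabel_frame would run in a
    subscriber callback before republishing on a new topic."""
    header: Header = field(default_factory=Header)

def relabel_frame(msg, target_frame="camera_imu_frame"):
    # Rewrite the frame_id in place so downstream consumers that
    # expect the old frame name (e.g. a VIO package) keep working.
    msg.header.frame_id = target_frame
    return msg

msg = Imu(Header("camera_imu_optical_frame"))  # hypothetical new frame name
print(relabel_frame(msg).header.frame_id)  # camera_imu_frame
```

Whether this is safe depends on what else changed alongside the frame name (e.g. any accompanying extrinsics), so it is only a stopgap until the wrapper developers answer.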
Your question above is something that I do not have knowledge of (I am on the Support team, not the development team). @doronhi Could you assist @shirbarzel with the question above please? Thanks!
@shirbarzel Whilst answering another RealSense user's SLAM question, I recalled a D435i-compatible SLAM software called Kimera, by MIT SPARK Lab, and found the link to the details.
At 11 minutes 25 seconds into the YouTube video at the above link, it says that Kimera can be installed without ROS, though that takes longer (30 minutes) and more skill than the easier ROS-based installation (5 minutes to begin compilation, 20 minutes to complete).
As you can see here: https://github.com/MIT-SPARK/Kimera-VIO-ROS/issues/67 - I've tried this package as well and had a lot of problems, but thanks.
Hi @MartyG-RealSense @doronhi, any updates?
@shirbarzel There is nothing new that I know of to add on that question, unfortunately.
I will close this case now, as there is unfortunately no further information that can be provided.
Hi, for some time now I have been trying to find a solution to the problem of visual-inertial odometry in low-textured and dynamically illuminated environments. I have tried several methods in combination with the D435i, but I see that in practice there is no good solution to this problem.