Hi @Shrioe. After the D435i SLAM guide was published, the gyro and accel IMU topics were set to disabled by default in the RealSense ROS wrapper. Please try enabling the topics using the launch instruction below to see whether it makes a difference to the errors that you are receiving.
roslaunch realsense2_camera opensource_tracking.launch enable_accel:=true enable_gyro:=true unite_imu_method:=linear_interpolation
Hello,
Thank you for your reply! Just to let you know, I'm using a Raspberry Pi 4B with Ubuntu 20.04.
For your information, after running the command you mentioned above the errors are still there, and RViz closes by itself automatically a few seconds after launching.
Could you please provide some suggestions?
Thank you so much!
At https://github.com/IntelRealSense/realsense-ros/issues/2494 another RealSense ROS user with Ubuntu 20.04 experienced "Critical Error, NaNs were detected in the output state of the filter" on a PC, so the cause may not be that you are using a Pi 4 board. SLAM in general is a processing-intensive activity, so this error could occur even on a full-size computer.
Looking at the full list of errors, most of them look like what I would call 'cascade errors' caused by the first NaN error. In other words, if the initial 'Critical Error: NaNs were detected' error had not occurred then the subsequent errors that refer to NaN likely would not have occurred.
The 'Failed to meet update rate!' warning might also be occurring because of the Critical Error.
Does stability improve if you use unite_imu_method:=copy in your launch instruction instead of setting it to linear_interpolation?
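For reference, the full launch instruction with copy is the same command as above, with only the unite_imu_method value changed:

roslaunch realsense2_camera opensource_tracking.launch enable_accel:=true enable_gyro:=true unite_imu_method:=copy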
As you are using a Pi 4B, that could also be a cause of the errors, because RealSense cameras usually have problems with accessing IMU data when used with Raspberry Pi boards.
Hello @MartyG-RealSense, referring to #2494, I tried to edit the rs_camera.launch file as shown below:
But when I run "roslaunch realsense2_camera opensource_tracking.launch enable_accel:=true enable_gyro:=true unite_imu_method:=linear_interpolation", there are no more NaN errors, but I still get:

15/06 14:26:33,788 WARNING [281472450236784] (messenger-libusb.cpp:42) control_transfer returned error, index: 300, error: Resource temporarily unavailable, number: b
[ WARN] [1718432786.188037476]: Failed to meet update rate! Took 0.0067153130000000005317
15/06 14:26:33,890 WARNING [281472450236784] (messenger-libusb.cpp:42) control_transfer returned error, index: 300, error: Resource temporarily unavailable, number: b
In RViz, there is no map shown. I cannot tick "map" or "camera_link" because under "frame" there is only "All enabled". Also, under "image", no image is shown even though I changed the topic to "/camera/color/image_raw".
There is a past case of the opensource_tracking.launch file at the link below that experienced the same warnings that you had. In their particular case the problem was apparently caused by the USB port that they were using the camera on.
https://support.intelrealsense.com/hc/en-us/community/posts/8338902895379-D435I-IMU-ISSUE
For another user at https://github.com/IntelRealSense/realsense-ros/issues/2386#issuecomment-1179073597 they found that the problem was caused by having installed librealsense twice by different methods.
I am using the cable that came with the D435i when I bought it, and I have checked the USB device details:
After checking, I found that the camera only has the issue (no image shown) when I change these lines in rs_camera.launch:

<arg name="enable_gyro" value="true"/>
<arg name="enable_accel" value="true"/>
I have already tried the solutions from #2494, #2386, and https://support.intelrealsense.com/hc/en-us/community/posts/8338902895379-D435I-IMU-ISSUE. Since the image cannot appear if I modify enable_gyro from false to true, I only adjusted the fps value. When I tried to run "roslaunch realsense2_camera opensource_tracking.launch enable_accel:=true enable_gyro:=true unite_imu_method:=copy", everything reverted back to my initial problem, the NaN error, and now RViz closes automatically by itself a few seconds after it pops up.
But if I run "roslaunch realsense2_camera opensource_tracking.launch enable_accel:=true enable_gyro:=true unite_imu_method:interpolation _linear", then RViz will appear, just that there is still no map shown. It only forms the map of the first image it captures; after that, even though I move the camera around, the map won't change. In this case, how can I record the rosbag to form the map, or is there any other possible solution? As long as I can form a map of the surroundings, either a 2D map or a point cloud is OK for me.
What if I forget about the IMU features and just form the map without IMU data, is that possible?
*For your information, here is my setup:
RealSense ROS v2.3.2
Built with librealsense v2.50.0
Running with librealsense v2.50.0
Device USB type: 3.2
The unite_imu_method command is unite_imu_method:=linear_interpolation
The D435i SLAM guide is very old and its component installation instructions are for ROS1 Kinetic from around the year 2016. For newer ROS1 versions such as Noetic I recommend adjusting the installation instructions to use the newer Noetic versions of the components if you have not done so already.
imu_filter_madgwick: sudo apt-get install ros-noetic-imu-filter-madgwick
rtabmap_ros: sudo apt-get install ros-noetic-rtabmap-ros
robot_localization: sudo apt-get install ros-noetic-robot-localization
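To confirm that the Noetic versions of all three packages are present after installation, you can list them with dpkg (the grep pattern below is just an example):

dpkg -l | grep -E 'imu-filter-madgwick|rtabmap-ros|robot-localization'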
An alternative to the D435i SLAM guide that does not need an IMU may be ORB_SLAM2_ROS, which is compatible with ROS1 and the IMU-less D435 camera model.
https://github.com/appliedAI-Initiative/orb_slam_2_ros
There is also a newer version called ORB_SLAM3 that has been tested with Noetic.
I'm currently using ROS Noetic, and I installed imu_filter_madgwick (sudo apt-get install ros-noetic-imu-filter-madgwick), rtabmap_ros (sudo apt-get install ros-noetic-rtabmap-ros) and robot_localization (sudo apt-get install ros-noetic-robot-localization) from the beginning too.
Actually, I was previously working on ORB_SLAM3 too for a few months, but I always failed to build the ORB_SLAM3 folder after many attempts and I still can't figure out the possible reasons. That's why I switched to the SLAM with D435i guide.
What if I reinstall everything, for example Ubuntu 16 instead of Ubuntu 20.04, in order to use this RTABMAP, would that work? And if I use Ubuntu 16, does that mean I cannot use a Raspberry Pi 5?
How about this: "how can I record the rosbag to form the map, or is there any other possible solution? As long as I can form a map of the surroundings, either a 2D map or a point cloud is OK for me."
I would not recommend going back to Ubuntu 16.04 and Kinetic as the Kinetic branch of ROS1 is End of Life (retired).
Because of the problems with accessing the IMU of RealSense cameras on Raspberry Pi boards, any solution that uses gyro and accel data likely would not be suitable for Pi 4 or Pi 5.
Instead of using ROS, perhaps you could record a bag file of 2D data in the RealSense Viewer tool using its 'Record' button. You could then play back the bag in the Viewer and switch into the Viewer's 3D pointcloud mode to render the 2D bag data as a 3D pointcloud in real-time. The 3D pointcloud mode enables the pointcloud to be exported to a .ply pointcloud file that can then be loaded into 3D software for further editing, such as converting the pointcloud into a solid object.
If you know how to compile C++ programming code then you could also consider using the rs-kinfu tool, which progressively builds up a pointcloud as the camera is moved, similar to how the map in the D435i SLAM guide is generated. The pointcloud can then be exported to a .ply.
https://github.com/IntelRealSense/librealsense/tree/master/wrappers/opencv/kinfu
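If you do try rs-kinfu, it is built from librealsense source as part of the OpenCV wrapper samples, with OpenCV itself built with the contrib rgbd module. A rough sketch of the steps is below; the CMake flag name is from memory, so please verify it against the wrapper's readme at the link above:

git clone https://github.com/IntelRealSense/librealsense.git
cd librealsense && mkdir build && cd build
# Flag name as recalled from the OpenCV wrapper readme; check before relying on it.
cmake .. -DBUILD_CV_EXAMPLES=true
make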
I see, thank you so much for your clarification! May I ask whether there is any suitable or useful website I can refer to on how to use the RealSense Viewer to record 2D data and process it into a 3D point cloud?
The bag recording process in the Viewer is simple. To start recording, click the Record button at the top of the Viewer window. To stop recording and create the bag file, click the Record button again.
To play back the bag file in the Viewer, you can drag and drop the bag file into the center panel of the Viewer to automatically start playback.
Once playback has begun, click on the 3D option in the top corner of the Viewer window to switch to 3D pointcloud mode and automatically render the 2D data as a pointcloud.
To find the location on your computer that the bag file will be saved to, click on the gear-wheel icon at the top corner of the Viewer and select Settings from its drop-down menu to open the settings configuration interface.
When the settings interface pops up, click on the Playback and Record tab and look at the text beside 'Default recording folder'.
You might also find the link below's guide to exporting a 3D pointcloud from the Viewer's 3D mode to .ply file helpful.
https://www.andreasjakl.com/capturing-3d-point-cloud-intel-realsense-converting-mesh-meshlab/
Thank you so much for the guidance! I think I might still need to form a final map from the .ply file. If I use 3D tools such as MeshLab, does that mean I am no longer using VSLAM for mapping? Because my project is to use the VSLAM technique to perform mapping.
In other words, do you mean that after I get the .bag file using realsense-viewer, I can then proceed with:

roscore >/dev/null 2>&1 &
rosparam set use_sim_time true
rosbag play my_bagfile_1.bag --clock
roslaunch realsense2_camera opensource_tracking.launch offline:=true

and then run

rosrun map_server map_saver map:=/rtabmap/proj_map -f my_map_1

to form the 2D map?
Actually, may I clarify again what this line of code does: "roslaunch realsense2_camera opensource_tracking.launch offline:=true"? Is it that once I have a recorded rosbag, I need to run "rosbag play my_bagfile_1.bag --clock" and then proceed with the line of code mentioned above? When I launch this line, it only opens an RViz window with nothing inside and without the configuration; there is also no option for "map" or "camera_link" in the display panel, no pointcloud, and no image shown. I'm not sure what is supposed to happen when running this code, can you explain that to me?
And I have tried recording in the RealSense Viewer, and I now have a .bag file. What should I do next to get the final map?
https://github.com/stella-cv/stella_vslam
The procedure from the D435i SLAM guide that you describe is correct. However, bag files recorded with RealSense SDK programs such as the RealSense Viewer have some differences to ROS rosbags and so may not be fully compatible with playback in ROS.
When the opensource_tracking.launch file is launched, it imports settings from the rs_camera.launch file and configures some additional settings that enable the camera to be used with RTABMAP.
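If you would like to see exactly which settings it configures, the rosbash roscat command prints a file from an installed package (assuming realsense2_camera is on your ROS package path):

roscat realsense2_camera opensource_tracking.launch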
The D435i SLAM guide has not been updated since January 2019, so it cannot be guaranteed that it will work correctly in the present day.
When it is functioning correctly, it is meant to look like the rotating map image under the text in the guide and the resulting point cloud.
Thank you for your response!
Does that mean I cannot proceed with the ROS code in the SLAM with D435i guide using the .bag file recorded in realsense-viewer, and that I must instead do the recording using "roslaunch realsense2_camera opensource_tracking.launch"?
I think the part I don't understand is the code "roslaunch realsense2_camera opensource_tracking.launch offline:=true", which is run after replaying the rosbag. Normally, even when I have recorded a bag file using RViz, after I replay the recorded rosbag and run "roslaunch realsense2_camera opensource_tracking.launch offline:=true", I can't get any output that allows me to proceed and form the 2D map or pointcloud, because I think the SLAM with D435i guide says that only once the system is up can I run the code to save the 2D map. May I get some advice from you at this step, regarding how I can convert my recorded .bag file (not from realsense-viewer) to a 2D map?
I think whether a bag file recorded in the Viewer will play in ROS will depend on the contents of the messages stored in the bag. There is no harm in trying playback of the file in ROS to see what happens.
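Before that, it is worth checking what the Viewer actually stored in the bag. The rosbag info command lists the topics and message types inside the file:

rosbag info my_bagfile_1.bag

If the topic names and types do not match what opensource_tracking.launch subscribes to, that would explain RViz showing nothing during playback.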
I believe that you would need to include the IMU topic enabling commands in the offline launch in order for the bag's IMU data to be published.
roslaunch realsense2_camera opensource_tracking.launch enable_gyro:=true enable_accel:=true unite_imu_method:=linear_interpolation offline:=true
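Putting that together with the replay steps from the D435i SLAM guide, the full offline sequence would look something like this (the same commands you listed earlier, just with the IMU topics enabled in the launch):

roscore >/dev/null 2>&1 &
rosparam set use_sim_time true
rosbag play my_bagfile_1.bag --clock
roslaunch realsense2_camera opensource_tracking.launch enable_gyro:=true enable_accel:=true unite_imu_method:=linear_interpolation offline:=true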
Yes, I have tried to run the .bag file I recorded through the RealSense Viewer, followed by "roslaunch realsense2_camera opensource_tracking.launch offline:=true". It's just that we come back to the same question I mentioned above in number 3: even though I can replay the bag, I don't know how to use this code to get the final map, whether I need to run any other extra code, do any configuration, etc. I am always unsure at this step, so I tried to record a video for you as a demonstration: https://youtu.be/PExElKPb_08?si=girdvFsoaoC1Us7Y
Alright will try
Hello @MartyG-RealSense , any update regarding number 2?
If the D435i SLAM guide and ORB_SLAM do not work for you then the last available option with a chance of success may be the SLAM tool Kimera-VIO-ROS, which is compatible with Noetic.
Alright thank you for the suggestions, I'll try for that!
Anyway, based on my previous comment, can you tell me whether I'm replaying the rosbag and forming the map in the correct way?
I did not see a problem in your replay procedure. However, because a RealSense SDK bag was being used instead of a ROS rosbag it may not succeed because of the differences between SDK bags and rosbags.
Actually, the same thing happens even if I use a ROS rosbag too: RViz starts without the configuration and shows no response even while the rosbag is playing, so I'm not sure whether I have missed some important steps.
Just to add on, I have both ros-noetic-librealsense2 and ros-noetic-realsense2-camera on my Ubuntu. Is this allowed when performing mapping using RTABMAP?
I think that having ros-noetic-librealsense2 and ros-noetic-realsense2-camera on your computer means that the librealsense SDK and the ROS wrapper were both installed at the same time with the command below.
sudo apt-get install ros-$ROS_DISTRO-realsense2-camera
This installation method is called Method 1. Have you used this method, please?
Method 1 does not install realsense-viewer though. So if you used Method 1 and also have realsense-viewer on your computer, this would suggest that librealsense had been previously installed by a different method. In that situation there could be two different librealsense installations on the computer that are conflicting with each other.
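One way to check whether a second copy of librealsense is present is to list the librealsense libraries that the dynamic linker can see (a quick sketch; the paths on your system may differ):

ldconfig -p | grep librealsense

If librealsense2.so shows up in more than one location, for example a /usr/lib path from the Method 1 packages plus a /usr/local/lib path from a source build, that would point to two installations.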
- I followed the ROS1 legacy instructions as suggested in the link: https://github.com/IntelRealSense/realsense-ros/tree/ros1-legacy
- And yes, I have tried sudo apt-get install ros-$ROS_DISTRO-realsense2-camera.
- And yes, I can also launch the realsense-viewer on my Ubuntu. Does that mean I already have two redundant librealsense installations?
- I got the messages below in the terminal:

pi@raspberrypi:~$ dpkg -l | grep realsense
ii ros-noetic-librealsense2 2.50.0-1focal.20211115.135230 arm64 Library for capturing data from the Intel(R) RealSense(TM) SR300, D400, L500 Depth cameras and T2xx Tracking devices.
ii ros-noetic-realsense2-camera 2.3.2-1focal.20240111.191354 arm64 RealSense Camera package allowing access to Intel T265 Tracking module and SR300 and D400 3D cameras
If you have realsense-viewer then yes, it is likely that you have (or have previously had) two librealsense installations on your computer that were installed by different methods.
A table of librealsense versions at https://github.com/IntelRealSense/realsense-ros/issues/3026#issuecomment-1972696034, produced by the same grep command that you used and showing the same version numbers as yours, indicates to me that you do not have more than one librealsense version installed.
When you run realsense-viewer, which version number is it? The version number will be at the top of the viewer window.
For your information, here is my setup info again:
RealSense ROS v2.3.2
Built with librealsense v2.50.0
Running with librealsense v2.50.0
Device USB type: 3.2
Besides, I noticed the comment https://github.com/IntelRealSense/realsense-ros/issues/3128#issuecomment-2170281414 mentioning "roslaunch rtabmap_launch rtabmap.launch". May I ask whether I should run this code together with roslaunch realsense2_camera opensource_tracking.launch in order to get the map? Can you give me more information about this? Thank you!
Usually when using both the ROS wrapper launch file and RTABMAP launch file, rs_camera is launched first and then rtabmap.launch is launched secondly after the RealSense launch has completed.
roslaunch realsense2_camera rs_camera.launch
roslaunch rtabmap_launch rtabmap.launch
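For a D435i specifically, the RTAB-Map handheld mapping documentation pairs those two launches with aligned depth and explicit topic arguments, along these lines (treat this as a sketch to adapt rather than a guaranteed working configuration):

roslaunch realsense2_camera rs_camera.launch align_depth:=true
roslaunch rtabmap_launch rtabmap.launch rtabmap_args:="--delete_db_on_start" depth_topic:=/camera/aligned_depth_to_color/image_raw rgb_topic:=/camera/color/image_raw camera_info_topic:=/camera/color/camera_info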
But how about the roslaunch realsense2_camera opensource_tracking.launch?
opensource_tracking.launch contains settings that enable the camera to be used with RTABMAP, so I guess that if rtabmap.launch is used with rs_camera.launch (which lacks those RTABMAP settings) then rtabmap.launch takes care of configuring those settings instead.
Hi @Shrioe Do you require further assistance with this case, please? Thanks!
Case closed due to no further comments received.
Hi,
I am new to VSLAM. I am following the instructions of the "SLAM with D435i" tutorial and ran:

roslaunch realsense2_camera opensource_tracking.launch
I encountered some errors as shown below:
[ERROR] [1718263335.611787184]: Critical Error, NaNs were detected in the output state of the filter. This was likely due to poorly conditioned process, noise, or sensor covariances.
[ WARN] [1718263335.612158811]: Failed to meet update rate! Took 0.0086891700000000012677
Error: TF_NAN_INPUT: Ignoring transform for child_frame_id "camera_link" from authority "unknown_publisher" because of a nan value in the transform (nan nan nan) (-nan nan -nan nan) at line 240 in /tmp/binarydeb/ros-noetic-tf2-0.7.7/src/buffer_core.cpp
Error: TF_DENORMALIZED_QUATERNION: Ignoring transform for child_frame_id "camera_link" from authority "unknown_publisher" because of an invalid quaternion in the transform (-nan nan -nan nan) at line 255 in /tmp/binarydeb/ros-noetic-tf2-0.7.7/src/buffer_core.cpp
And I also faced an issue when trying to replay the rosbag file after running "roslaunch realsense2_camera opensource_tracking.launch offline:=true", whereby it only opens a new RViz window without showing anything. May I ask what I am supposed to get at this step?
Could you give me some suggestions?
Thx!