chivas1000 closed this issue 1 year ago
same problem
Could you please tell us a little more about your setup? You have mentioned running ROS2 Foxy, but DP1.0 onwards only supports Humble. You should be getting a camera_info topic from the RealSense camera for each of the two imagers, and these should be remapped to the appropriate topics in Isaac ROS VSLAM. As for performance, there should be no difference between the DP2.0 and DP1.1 releases. Could you tell us more about your evaluation method(s)?
Hi @swapnesh-wani-nvidia, thanks for the reply. To clarify the ROS setup: I installed ROS2 Foxy outside the container, but the container itself (set up as in this repo) runs ROS2 Humble, and everything described below runs inside the container.
I tested this function with the default launch file "isaac_ros_visual_slam_realsense.launch.py". Here is the launch file with the camera node configuration:
import launch
from launch_ros.actions import ComposableNodeContainer, Node
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    """Launch file which brings up visual slam node configured for RealSense."""
    realsense_camera_node = Node(
        name='camera',
        namespace='camera',
        package='realsense2_camera',
        executable='realsense2_camera_node',
        parameters=[{
            'infra_height': 360,
            'infra_width': 640,
            'enable_color': False,
            'enable_depth': False,
            # On first launch the camera projected IR dots, so I disabled the
            # emitter. With the emitter on, the "tracker is lost" warning is
            # rare but performance is still bad; with it off, the warning keeps
            # popping up and performance is equally bad.
            'stereo_module.emitter_enabled': False,
            'depth_module.emitter_enabled': 0,
            'infra_fps': 90.0
        }]
    )

    visual_slam_node = ComposableNode(
        name='visual_slam_node',
        package='isaac_ros_visual_slam',
        plugin='isaac_ros::visual_slam::VisualSlamNode',
        parameters=[{
            'enable_rectified_pose': True,
            'denoise_input_images': True,
            'rectified_images': False,
            'enable_debug_mode': False,
            'debug_dump_path': '/tmp/elbrus',
            'enable_slam_visualization': True,
            'enable_landmarks_view': True,
            'enable_observations_view': True,
            'map_frame': 'map',
            'odom_frame': 'odom',
            'base_frame': 'camera_link',
            'input_left_camera_frame': 'camera_infra1_frame',
            'input_right_camera_frame': 'camera_infra2_frame'
        }],
        remappings=[('stereo_camera/left/image', '/camera/infra1/image_rect_raw'),
                    ('stereo_camera/left/camera_info', '/camera/infra1/camera_info'),
                    ('stereo_camera/right/image', '/camera/infra2/image_rect_raw'),
                    ('stereo_camera/right/camera_info', '/camera/infra2/camera_info')]
    )
    # /camera/realsense_splitter_node/output/infra_1

    visual_slam_launch_container = ComposableNodeContainer(
        name='visual_slam_launch_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container',
        composable_node_descriptions=[
            visual_slam_node
        ],
        output='screen'
    )

    return launch.LaunchDescription([
        visual_slam_launch_container,
        realsense_camera_node
    ])
And this is the header of one /camera/infra1/camera_info message. I don't know whether it is right or wrong, but I assume this info is subscribed by the VSLAM node:

header:
  stamp:
    sec: 1666677052
    nanosec: 868972032
  frame_id: camera_infra1_optical_frame
height: 480
width: 848
distortion_model: plumb_bob
d:
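As a quick, hedged sanity check (plain Python, no ROS; the dict below only mirrors the echoed message above, with field names from sensor_msgs/CameraInfo), one can compare the resolution reported in camera_info against the stream profile requested in the launch file:

```python
# Hypothetical check, not part of the launch file: does the camera_info
# resolution match the stream profile requested in the launch parameters?

requested = {'width': 640, 'height': 360, 'fps': 90}

# Values copied from the echoed /camera/infra1/camera_info header above.
camera_info = {
    'frame_id': 'camera_infra1_optical_frame',
    'height': 480,
    'width': 848,
    'distortion_model': 'plumb_bob',
}

def profile_matches(info: dict, req: dict) -> bool:
    """Return True if the camera streams at the requested resolution."""
    return info['width'] == req['width'] and info['height'] == req['height']

# The echoed message is 848x480, not the requested 640x360, hinting that
# the launch parameters were never applied by the RealSense node.
print(profile_matches(camera_info, requested))  # False
```

A mismatch here is consistent with the later observation in this thread that the RealSense parameters from the launch file weren't being parsed.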
In the end, I noticed that the IR cameras may have poor exposure and the resolution is low (360p). Do you use VSLAM with the RealSense in this same way? As for evaluation, VSLAM is clearly not working in the human sense (there is a lot of drift and the pose does not follow my actual movement), so I think something is wrong somewhere, otherwise such errors would not come up.
I'm also working on this tutorial application: https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox/blob/main/docs/tutorial-nvblox-vslam-realsense.md. It has a splitter node that alternates the emitter on and off (one frame on, the next frame off), forwarding the emitter-off frames to the IR camera output while the emitter-on frames preserve depth camera performance. The IR output is therefore 30 fps, but the camera info stays at 60 fps. Would this affect the performance? That may be a separate problem, though, since I probably need to solve the issue above before working on this one.
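For intuition, the splitter behavior described above can be sketched in plain Python (no ROS; the frame fields and function name are made up for illustration, not the real splitter API):

```python
# Toy model of the emitter-interleaving splitter described above.
# Frames arrive at 60 fps with the emitter toggling on/off each frame;
# only emitter-off frames (no IR dot pattern) are forwarded for VSLAM,
# so the IR output is effectively 30 fps.

def split_frames(frames):
    """Forward only the frames captured with the IR emitter off."""
    return [f for f in frames if not f['emitter_on']]

# One second of capture: 60 frames, emitter alternating every frame.
incoming = [{'idx': i, 'emitter_on': i % 2 == 0} for i in range(60)]

ir_output = split_frames(incoming)
print(len(incoming), len(ir_output))  # 60 30
```

This is why the IR image stream halves to 30 fps while the camera_info publication rate can stay at the sensor's 60 fps.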
I might calibrate the camera and try again to see whether that is the problem.
Again, thanks for your reply; any advice is welcome.
same problem. I reinstalled the OS on a Jetson AGX Orin using SDK Manager and installed the DP2 isaac_ros_visual_slam and DP2 isaac_ros_nvblox. I tested this function with the default launch file "nvblox_vslam_realsense.launch.py" using a D435.
I have the same issue, although I don't know what to expect since I haven't used other versions of Isaac ROS.
I just realized that the RealSense parameters from "isaac_ros_visual_slam_realsense.launch.py" aren't parsed. The infra cameras are still running at 848x480 @ 30 fps (at least on my install). Maybe the issue stems from a change in the ros2-beta branch of realsense-ros?
A quick look at the changes suggests that some of the parameter config options were removed from the "rs_launch.py" file in realsense-ros as well.
In other words, maybe the break comes from realsense-ros and not from the isaac_ros DP1.1 -> DP2.0 upgrade?
A snappy response from the realsense-ros team confirmed my suspicion: https://github.com/IntelRealSense/realsense-ros/issues/2520#issuecomment-1293560464
I changed the RealSense example launch file to configure the RealSense parameters and submitted PR #57. Sadly, though, the change didn't have a noticeable effect on the performance.
We looked into this issue, and it looks like the interaction between the realsense2_camera package and the tf2 package is causing it. As a workaround, do not pass the input_left_camera_frame and input_right_camera_frame parameters in the isaac_ros_visual_slam_realsense.launch.py launch file. In this mode, camera info will be used to infer the transformation between the left and right imagers.

Additionally, there have been a few changes in the parameters of the RealSense node. Use 'depth_module.emitter_enabled': 0 to disable the emitter and 'depth_module.profile': 'widthxheightxfps' to set the resolution and fps.
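Put together, the reworked parameter blocks might look like the sketch below (plain Python dicts rather than a full launch file, so this can be read standalone; the 640x360@90 values mirror the original launch file and are assumptions to adapt):

```python
# Sketch of the reworked parameters after the workaround above.
# The profile string format is 'widthxheightxfps'; values are assumptions.

width, height, fps = 640, 360, 90

realsense_params = {
    'enable_color': False,
    'enable_depth': False,
    'depth_module.emitter_enabled': 0,                  # disable the IR emitter
    'depth_module.profile': f'{width}x{height}x{fps}',  # replaces infra_width/_height/_fps
}

# Workaround: omit input_left_camera_frame / input_right_camera_frame so the
# left-right imager transform is inferred from camera_info instead of tf2.
visual_slam_params = {
    'denoise_input_images': True,
    'rectified_images': False,
    'base_frame': 'camera_link',
}

print(realsense_params['depth_module.profile'])  # 640x360x90
```

The key differences from the original launch file are the single profile string in place of the separate width/height/fps parameters, and the absence of the two frame parameters from the VSLAM node.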
> As a workaround, do not pass the input_left_camera_frame and input_right_camera_frame parameters in the isaac_ros_visual_slam_realsense.launch.py launch file.

@swapnesh-wani-nvidia This fixed all my problems, thanks a bunch
Thanks and will test it as soon as I switch back to DP2.0
Thanks @swapnesh-wani-nvidia! Your suggestions solved the problem. However, with fast motion the estimation can diverge. Would using the D435i IMU improve the estimation and make it more robust?
This hotfix updated the realsense launch file in this repo as per the discussions above.
Hi, I'm using VSLAM as a state estimator in nvblox with a D435i camera, on x86 with an NVIDIA GPU (Ubuntu 20.04.5 LTS, ROS2 Foxy), using the RealSense VSLAM examples in the launch folder of VSLAM and https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox/blob/main/docs/tutorial-nvblox-vslam-realsense.md
but after the DP2.0 release update, VSLAM always shows "tracker is lost"
and there are very few feature points, whereas with the DP1.2 VSLAM release many feature points were output in rviz.
I've followed the troubleshooting suggestions:
My stereo input is 60 fps, I use a RealSense D435i, and I split the camera images so there are no light dots in them. As for the camera info, I'm not sure what the "right" camera info is; all I know is that it can be seen via rostopic echo.
What I want to ask is: is there any difference between DP2.0 and DP1.2 that would affect the performance?
And is there any way to tune VSLAM aside from the troubleshooting steps? For the Elbrus VSLAM, can I tune configurations such as the number of feature points, as in ORBSLAM3?
And would involving the D435i's IMU be worth a try?
Thanks.