Buddies-as-you-know opened this issue 10 months ago
I'm not very familiar with isaac_ros_visual_slam, but it seems isaac_ros_visual_slam uses a stereo camera and an IMU.

You can already publish image topics with the camera component: https://rapyutasimulationplugins.readthedocs.io/en/devel/doxygen_generated/html/d9/d91/class_u_r_r_r_o_s2_camera_component.html

However, RapyutaSimulationPlugins currently doesn't have an IMU sensor; we would need to add one.

Also note that rclUE/RapyutaSimulationPlugins are mainly tested on Ubuntu 20.04/22.04 with ROS 2 Foxy/Humble.
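As a first step, it may help to confirm that the image topic from UE is visible on the ROS 2 side before adding any SLAM node. A minimal rclpy sketch; the topic name /camera/image_raw is an assumption, so adjust it to whatever your URRROS2CameraComponent is configured to publish:

```python
# Minimal sketch to confirm the UE camera component is publishing.
# The topic name /camera/image_raw is an assumption -- check the topic
# configured on your URRROS2CameraComponent and change it if needed.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image


class ImageCheck(Node):
    def __init__(self):
        super().__init__('ue_image_check')
        self.sub = self.create_subscription(
            Image, '/camera/image_raw', self.on_image, 10)

    def on_image(self, msg: Image):
        # Log resolution and encoding; a VSLAM front end will also need
        # a matching camera_info topic with valid intrinsics.
        self.get_logger().info(
            f'{msg.width}x{msg.height} encoding={msg.encoding}')


def main():
    rclpy.init()
    rclpy.spin(ImageCheck())


if __name__ == '__main__':
    main()
```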
Is it possible to integrate not only isaac_ros_visual_slam but also other vision-based SLAM packages? I want to drive a TurtleBot autonomously from images: run SLAM on a monocular or depth image in UE and then navigate with nav2, but I don't know how to set this up.
You can use a depth camera by setting URRROS2CameraComponent::CameraType = EROS2CameraType::DEPTH instead of EROS2CameraType::RGB.
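To verify what the depth camera actually publishes, a small rclpy sketch like the following can help; the topic name /camera/depth/image_raw is an assumption, and the reported encoding tells you whether any RGB-to-depth conversion is still needed on your side:

```python
# Sketch: inspect the depth stream after switching CameraType to DEPTH.
# The topic name is an assumption -- verify it against what the
# component actually publishes in your setup.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge


class DepthCheck(Node):
    def __init__(self):
        super().__init__('ue_depth_check')
        self.bridge = CvBridge()
        self.sub = self.create_subscription(
            Image, '/camera/depth/image_raw', self.on_depth, 10)

    def on_depth(self, msg: Image):
        # 'passthrough' keeps the original encoding (e.g. 32FC1 or 16UC1),
        # so min/max show the raw range values the camera reports.
        depth = self.bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')
        self.get_logger().info(
            f'encoding={msg.encoding} min={depth.min():.2f} max={depth.max():.2f}')


def main():
    rclpy.init()
    rclpy.spin(DepthCheck())


if __name__ == '__main__':
    main()
```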
Would it be advisable to set this up on the Unreal Engine side (encoding: RGB -> depth)?
When I run ros2 launch rtabmap_demos turtlebot3_scan.launch.py, is rtabmap running on the depth camera?
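For reference, turtlebot3_scan.launch.py drives rtabmap from the lidar scan rather than the depth camera, as far as I know. Running rtabmap on an RGB-D stream from UE usually means remapping its RGB-D inputs directly. A minimal launch sketch, assuming the topic names from the snippets above and the rtabmap_ros ROS 2 package layout (rtabmap_odom / rtabmap_slam):

```python
# Minimal launch sketch for running rtabmap on an RGB-D stream instead of
# the lidar-based turtlebot3_scan demo. The remapped topic names and the
# frame_id are assumptions -- match them to what your UE camera publishes.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    remappings = [
        ('rgb/image', '/camera/image_raw'),
        ('depth/image', '/camera/depth/image_raw'),
        ('rgb/camera_info', '/camera/camera_info'),
    ]
    params = {'frame_id': 'base_link',
              'subscribe_depth': True,
              'approx_sync': True}
    return LaunchDescription([
        # Visual odometry from the RGB-D pair.
        Node(package='rtabmap_odom', executable='rgbd_odometry',
             parameters=[params], remappings=remappings),
        # SLAM node; publishes the map and the map->odom transform
        # that nav2 can consume.
        Node(package='rtabmap_slam', executable='rtabmap',
             parameters=[params], remappings=remappings,
             arguments=['-d']),  # -d: delete the previous database on start
    ])
```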
Here is the camera type setting (screenshot of the CameraType property on URRROS2CameraComponent):
Overview

isaac_ros_visual_slam provides SLAM (Simultaneous Localization and Mapping) capabilities, and I am particularly interested in its application within the real-time 3D environment of Unreal Engine.

Objective

By integrating isaac_ros_visual_slam, I aim to seamlessly combine robot operations in Unreal Engine with ROS 2 functionalities.

Questions

- Is it possible to integrate isaac_ros_visual_slam into rclUE?
- How would isaac_ros_visual_slam impact data exchange between Unreal Engine and ROS 2?

Additional Information

Any guidance or experience with isaac_ros_visual_slam would be highly appreciated.