OdedHorowits opened this issue 3 months ago
Hi! Thanks for the new issue. No, ORB-SLAM3 cannot directly be used for navigation on the fly, simply because it doesn't provide any 2D/3D occupancy map. What you can get out of it is an accurate localisation estimate and the optimised keyframe poses; these poses are the previously traversed areas of the robot. If you want to navigate, you'll need a separate mapping algorithm and then feed that map to a navigation stack.
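As an illustration of the kind of separate mapping step meant here, one could project the tracked 3D map points into a 2D occupancy grid. A minimal sketch in plain Python follows; the grid size, resolution, and the simple ground-plane projection are my own assumptions, not anything ORB-SLAM3 provides:

```python
import math

def points_to_occupancy(points_xyz, resolution=0.1, size=100):
    """Project 3D map points onto the ground plane into a size x size grid.

    resolution is metres per cell (an assumed value). Cells hit by a point
    are marked 100 (occupied), matching the nav_msgs/OccupancyGrid scale;
    everything else stays 0.
    """
    grid = [[0] * size for _ in range(size)]
    half = size // 2  # put the map origin at the grid centre
    for x, y, _z in points_xyz:
        col = math.floor(x / resolution) + half
        row = math.floor(y / resolution) + half
        if 0 <= row < size and 0 <= col < size:
            grid[row][col] = 100
    return grid

# Example: two map points one metre apart along x.
pts = [(0.0, 0.0, 0.5), (1.0, 0.0, 0.5)]
grid = points_to_occupancy(pts)
```

A real mapping node would also ray-trace free space and handle dynamic obstacles; this only shows the data flow from SLAM points to a grid a navigation stack could consume.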
Thanks for your response. I edited the former post, please check it. Mainly, I ask: if I start flying around, would the poses array keep growing? Does it store all the previously traversed areas?
Hi,

So /drone0/map_points will have messages only if the ros_visualization parameter is set to true in the params file. This is still under development; I have not been able to test it.

To answer your questions: the poses are expressed in the global frame, in this case the map frame; and yes, true. Beyond that, I will need more information before proposing a solution.

Thank you so much for taking the time to think about this and answer. Regarding the last question: if I start flying around, would the poses array keep growing? Does it store all the previously traversed areas?
Yes, you are right. The pose array will keep growing as long as the SLAM is running and it doesn't lose tracking. And yes, it stores all the previously traversed areas.
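Since the pose array grows without bound, a consumer that only needs the rough trajectory can thin it on its own side. A minimal sketch; the keep_every factor is an arbitrary choice of mine, not a feature of the wrapper:

```python
def decimate_poses(poses, keep_every=10):
    """Keep every Nth pose from a growing pose array, plus the most
    recent pose so the current position is never dropped."""
    kept = poses[::keep_every]
    if poses and poses[-1] not in kept:
        kept.append(poses[-1])
    return kept

# Stand-in for 101 accumulated keyframe poses.
history = list(range(101))
thinned = decimate_poses(history)
```

For a flight of 101 poses this keeps 11 samples while preserving the latest one, which is usually enough for visualising where the robot has been.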
Hi, is the behavior shown here OK? It doesn't seem so...
Looking at the ORB-SLAM3: Map Viewer app, I see the following after moving the drone in a rectangular path:

How do I get the data for those blue signs? That is actually the localization data, i.e. where the drone thinks it is.
Hi, regarding the TF issue, I'll have a look at what's going on. Can you give me more information about your setup? Are you running with or without odometry? If the drone doesn't provide odometry data, then use the wrapper with the parameter no_odometry_mode set to true.

For the second comment, you should be able to find this information on a topic named map_data. I think that was the topic name, or it should be something similar. The data you see in the viewer is published on this topic at a fixed frequency.
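A minimal subscriber sketch for that topic. The topic name /drone0/map_data and the slam_msgs/msg/MapData type are taken from later in this thread; the exact field names of MapData (and the count_poses helper) are my assumptions, so treat this as a template rather than the wrapper's API:

```python
def count_poses(pose_array_like):
    """Pure helper, testable without ROS: number of poses in a
    PoseArray-like object (anything with a .poses list)."""
    return len(pose_array_like.poses)

def main():
    # ROS-dependent part: requires a sourced ROS 2 workspace where the
    # slam_msgs package is built. Call main() from such an environment.
    import rclpy
    from rclpy.node import Node
    from slam_msgs.msg import MapData  # assumed message package name

    class MapDataListener(Node):
        def __init__(self):
            super().__init__('map_data_listener')
            self.create_subscription(
                MapData, '/drone0/map_data', self.on_map_data, 10)

        def on_map_data(self, msg):
            # 'graph' is an assumed field name, based on the thread's
            # later mention of an optimized graph carrying poses.
            self.get_logger().info(f'poses in graph: {count_poses(msg.graph)}')

    rclpy.init()
    rclpy.spin(MapDataListener())
```

Keeping the message-inspection logic in a plain function like count_poses makes it easy to unit-test without spinning up a node.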
regarding the TF Issue

So the setting is correct, i.e. no_odometry_mode: true.
Can you give me more information about your setup?

I am using Aerostack2. What information can I give that would help?
Hi, I tried to recreate this issue and was unable to. I set no_odometry_mode to true and passed only two topics into the docker container, the RGB image and the depth image, and it seems to run fine. Can you please try the following? After you have changed the parameter file, do a clean build by deleting the build and install directories in the workspace, then run:

colcon build --symlink-install

This is the parameter file:

ORB_SLAM3_RGBD_ROS2:
  ros__parameters:
    robot_base_frame: base_footprint
    global_frame: map
    odom_frame: odom
    robot_x: 0.0
    robot_y: 0.0
    visualization: true
    ros_visualization: false
    no_odometry_mode: true
Sorry for the late reply, the issue is still a black box :p I think more debugging might be needed.
I don't see any TF section in rviz :-(

Edit: OK, found it: in rviz, click the "Add" button, then select the "TF" display type. That gave me:
Which makes me look at the first part of the params file:

ORB_SLAM3_RGBD_ROS2:
  ros__parameters:
    robot_base_frame: base_footprint
    global_frame: map
    odom_frame: odom
Using ros2 run rqt_tf_tree rqt_tf_tree I can create the following image:
However, it also repeatedly prints the following errors to the shell where I run the rqt_tf_tree tool:
at line 209 in ./src/buffer_core.cpp
Error: TF_NO_CHILD_FRAME_ID: Ignoring transform from authority "default_authority" because child_frame_id not set
at line 217 in ./src/buffer_core.cpp
Error: TF_NO_FRAME_ID: Ignoring transform with child_frame_id "" from authority "default_authority" because frame_id not set
at line 224 in ./src/buffer_core.cpp
That is very similar to the errors I get when launching the orb_slam wrapper.
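Those two TF errors mean some node is broadcasting transforms whose frame names are empty, so tf2 drops them. A plain-Python sketch of the same sanity check, mirroring the wording of the log messages above rather than calling tf2 itself; applying it before broadcasting would catch the bad transform at its source:

```python
def transform_errors(frame_id, child_frame_id):
    """Replicate the two tf2 checks behind the log messages above:
    a transform is rejected if either frame name is empty."""
    errors = []
    if not child_frame_id:
        errors.append("TF_NO_CHILD_FRAME_ID: child_frame_id not set")
    if not frame_id:
        errors.append("TF_NO_FRAME_ID: frame_id not set")
    return errors

# A transform missing its child frame would be rejected,
# while a fully named one passes.
bad = transform_errors("map", "")
good = transform_errors("map", "base_footprint")
```

In this thread's setup, the frames to check would be the ones from the params file (map, odom, base_footprint).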
Hi!

So, after making it work comes the real question: how can I use the results of the algorithm? Let's say I move my drone around and map the area. How can I use that map? Can I use it on the fly for navigation?
In detail: the ORB_SLAM3 app publishes to two topics. ros2 topic info /drone0/map_data says it uses slam_msgs/msg/MapData, and ros2 topic info /drone0/map_points says it uses sensor_msgs/msg/PointCloud2.

Trying to echo /drone0/map_points gave me nothing. I thought the reason was that the message is too big, but I also get nothing in rviz when adding a PointCloud that subscribes to this topic. Echoing /drone0/map_data, however, gives me some results. The structure of the msg is as follows:

Using a python subscriber I can extract the results, which look like this:
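Separately, when map_points echoes nothing, it can help to decode the PointCloud2 bytes by hand to see whether the buffer is actually empty. A self-contained sketch using only struct; the float32 x, y, z fields at offsets 0, 4, 8 are the common PointCloud2 layout, which is an assumption about this wrapper's output:

```python
import struct

def read_xyz(data, point_step, num_points):
    """Unpack float32 (x, y, z) triples from a PointCloud2-style byte
    buffer, assuming x, y, z sit at offsets 0, 4, 8 of each
    point_step-sized record (little-endian)."""
    points = []
    for i in range(num_points):
        base = i * point_step
        points.append(struct.unpack_from("<fff", data, base))
    return points

# Build a fake two-point buffer; point_step=16 leaves 4 bytes of padding
# per point, as many PointCloud2 publishers do.
buf = b"".join(struct.pack("<fff", *p) + b"\x00" * 4
               for p in [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)])
pts = read_xyz(buf, 16, 2)
```

With a real message, data, point_step, and width come straight from the PointCloud2 fields; if width is 0, the SLAM side never filled the cloud, which would explain the silent echo.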
And some questions: why does the optimized graph have no Frame ID? Why are there no keyframes?

Thanks