SteveMacenski opened this issue 3 years ago
@SteveMacenski, my last PR adds new file modifications for the stereo support in ROS2. I'd appreciate it if you could review it. I used @ymd-stella's work as a base and followed the same approach. I think it would be a good idea to tag the next steps with respect to ROS2. Thanks in advance.
@mirellameelo I think you should split your PR into 2. Pushing too much into one PR slows things down dramatically. One PR for the doc updates and another for the big feature update would be best, so that @ymd-stella can review quickly.
A concern I have here is that the ROS wrapper isn't making use of ROS time, so there are going to be issues synchronizing the outputs of OpenVSLAM with the rest of a ROS-based system, since the timing of the output poses and transforms may not come from the same clock. I think all times (for mono as well) should be based on the ROS clock provided by the node. In many cases it will still be chrono time, but not in simulation, or when someone wants their nodes to use a specific clock.
Similarly with logging: everything uses std::cout rather than the node's logger (which also logs to a file so errors can be reviewed later).
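To make the clock concern concrete, here is a minimal sketch in Python with hypothetical names (the real wrapper is C++/rclcpp): instead of hard-coding wall-clock time, the stamping code takes whatever clock the node provides, so the same code works on system time or on a simulated clock.

```python
import time

def make_stamped_pose(pose, clock):
    """Attach a timestamp from the injected clock to a pose (hypothetical helper)."""
    return {"pose": pose, "stamp": clock()}

# Wall-clock source: what chrono-based code effectively hard-codes.
wall_clock = time.time

# Simulated clock, e.g. driven by a /clock topic in simulation.
class SimClock:
    def __init__(self):
        self.now = 0.0
    def __call__(self):
        return self.now

sim = SimClock()
sim.now = 1234.5  # simulation time, not wall time
stamped = make_stamped_pose({"x": 1.0, "y": 2.0}, sim)  # stamp is 1234.5
```

The same pose-stamping code runs unchanged with either clock; only the injected source differs, which is exactly what basing timestamps on the node's ROS clock buys us.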
https://github.com/OpenVSLAM-Community/openvslam/tree/main/ros/2/src/publisher I wonder if it wouldn't also make sense to delete these publisher directories. Camera images are normally provided by ROS camera drivers, so these seem unnecessary specifically for ROS.
Thanks for replying. I agree regarding the timing and logging; I'll change them. Although there are ROS2 packages responsible for publishing frames, I think it is worth keeping these "publishers" from a didactic point of view. They align with the examples for running in ROS, so I believe they are a practical and easy way to do at least a test, especially for new users, besides serving as a reference for adding new publishers for different dataset patterns.
For those in the Nav2 Slack group, we're discussing in the mapping and localization channel how to port to ROS2 and expose everything necessary.
Also working on making the parameters ROS-y so that people can modify the OpenVSLAM settings like any other ROS node
Copying the task plan from Slack discussion:
- [In progress by @mirellameelo in OpenVSLAM-Community/openvslam#41] task1: Support for stereo cameras in ROS2 OpenVSLAM
- task2: Support for RGB-D cameras in ROS2 OpenVSLAM [Done by @mirellameelo in a repository fork]
- task3: We also need to add TF2 support. Currently, only nav_msgs::msg::Odometry pose messages are being published; I think we need a supplementary full "map->odom" frame transform for ROS2 SLAM/localization nodes.
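For reference on the "map->odom" point: the conventional way ROS SLAM nodes publish this transform is to compute it from the SLAM-estimated map->base pose and the odometry-reported odom->base pose, as map->odom = map->base * inverse(odom->base). A minimal 2D sketch with hypothetical helpers (not openvslam_ros code):

```python
import math

# Poses are (x, y, yaw) in SE(2), purely for illustration.

def compose(a, b):
    """Compose two SE(2) poses: a then b."""
    ax, ay, ath = a
    bx, by, bth = b
    return (ax + math.cos(ath) * bx - math.sin(ath) * by,
            ay + math.sin(ath) * bx + math.cos(ath) * by,
            ath + bth)

def inverse(p):
    """Invert an SE(2) pose."""
    x, y, th = p
    return (-math.cos(th) * x - math.sin(th) * y,
             math.sin(th) * x - math.cos(th) * y,
            -th)

def map_to_odom(map_to_base, odom_to_base):
    """The standard SLAM correction transform: map->odom = map->base * (odom->base)^-1."""
    return compose(map_to_base, inverse(odom_to_base))

# If SLAM says the robot is at (2, 1, 0) in "map" and odometry says
# (1, 1, 0) in "odom", the correction is a pure translation (1, 0, 0).
correction = map_to_odom((2.0, 1.0, 0.0), (1.0, 1.0, 0.0))
```

Publishing map->odom this way (instead of map->base directly) lets odometry keep providing smooth local positioning between SLAM updates, which is the convention Nav2 expects.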
Also, the rectification for calibrated cameras is implemented only for ROS1, so we have to: task4: Add camera image rectification to the ROS2 SLAM/localization nodes. Here are a few open questions: How can we verify the camera calibrations? Can we use a Gazebo simulator, say with the TurtleBot3 model? Does this model support simulating non-calibrated cameras? Real hardware verification would not hurt either. All these questions are within the scope of this task.
After all nodes are prepared, this might be an extra activity: task5: Unify what we've done. This is required to get one ROS2 node that can act in either SLAM or localization mode (selected by a ROS2 parameter) and that supports all the previously covered camera types (mono/stereo/RGB-D).
Currently, OpenVSLAM only has an internally used map database, which has no ROS2 representation. Therefore we need task6: Investigate and extract an occupancy grid map (and/or a PCL map reconstruction) from OpenVSLAM internals. This could be divided into 2 parts (@SteveMacenski's suggestion):
I have a personal interest in this task, since I previously worked on occupancy-grid map building in a visual SLAM system and also have experience working with maps (map server re-designs and costmap filter development).
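To sketch what the first part of task6 might look like (purely illustrative, with hypothetical names; a real implementation would read landmarks from OpenVSLAM's map database and publish a nav_msgs::msg::OccupancyGrid): project the 3D map landmarks onto the ground plane and bin them into a 2D grid.

```python
def landmarks_to_grid(landmarks, resolution, width, height, origin=(0.0, 0.0)):
    """Mark each grid cell containing at least one landmark as occupied.

    Uses OccupancyGrid-style values: -1 = unknown, 100 = occupied.
    """
    grid = [[-1] * width for _ in range(height)]
    ox, oy = origin
    for x, y, z in landmarks:  # z (height) is dropped by the ground-plane projection
        col = int((x - ox) / resolution)
        row = int((y - oy) / resolution)
        if 0 <= col < width and 0 <= row < height:
            grid[row][col] = 100
    return grid

# Three landmarks; the third falls outside the 2x2 m grid and is ignored.
points = [(0.1, 0.1, 1.5), (0.9, 0.1, 0.2), (2.5, 2.5, 0.0)]
grid = landmarks_to_grid(points, resolution=1.0, width=2, height=2)
```

A production version would also need free-space marking (e.g. ray-casting from keyframe poses) and height filtering, but the binning above is the core of turning a sparse landmark map into something Nav2 can consume.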
task7: Based on the tasks above, a documentation update for OpenVSLAM should be prepared.
The ROS wrappers have been migrated to an independent repository: https://github.com/OpenVSLAM-Community/openvslam_ros. Please contribute there. If you want to create a new ROS issue, please create it in openvslam_ros; the issues that already exist can stay here. The ros directory will be removed after confirming that there are no problems and updating the documentation.
@ymd-stella should you remove the ros/ directory in openvslam then?
I see a main branch, what version is that? I think you can delete ros, since noetic-devel is the last ROS1 branch, so there's nothing else to add to it later. For now, I'd also recommend removing foxy-devel until we have something for ROS2 on ros2 worth releasing to Foxy (e.g. it's a lot of effort to keep those in sync while we're still developing).
Can you transfer this issue to there?
The current ROS version of openvslam_ros works even before noetic, as long as the dependent OpenCV versions are resolved correctly. Therefore, I will delete the noetic-devel branch and keep the ros branch.
What's the main branch in this context?
main was the default branch.
The ros2 branch is now the default. (I deleted the ros2 branch and renamed the main branch to ros2.)
@AlexeyMerzlyakov please provide a concise, bulleted list of things to do to go from where we are to a full release of a functioning, ROS2-integratable OpenVSLAM system (support for RGB-D, stereo, ROS timing, parameters, calibration, documentation, testing, CI, release, occupancy grid, etc.) so that we can see explicitly the status of all the ongoing things / future subtasks.
Sure, the scope of current activities is listed below (it will be updated dynamically as tasks are completed or appear):
Development items
Testing
Documentation
- testing of major features, including stereo, RGB-D, localization in a prior map, and timing (make sure we didn't break anything)
- TF transform publication gated by a publish_tf bool parameter (default: true)
- CPU/memory benchmarking
- parameterize topic names / frames
- full-system testing in a reasonably sized environment: run on a robot and record a dataset, then localize in it some time later under different lighting conditions as a demo (office building, etc.)
These items are clear. Agree.
- Publish Odometry or Pose message for fusion into R_L if fusing with multiple SLAM/localization systems
Could you please give more context on this? Currently, the pose (nav_msgs::msg::Odometry) is being published by openvslam_ros in the "camera_link" frame. Did you mean adding a TF reader to obtain arbitrary robot geometry conversions so the pose can be published in the "odom" frame?
- Localization mode responds to a relocalization request from a topic (like AMCL does with initial_pose)
Should we always re-localize the robot when initial_pose messages arrive, or only when tracking is lost?
Currently, the pose (nav_msgs::msg::Odometry) is being published by openvslam-ros in a "camera_link" frame.
That's fine then, as long as it's not removed and the frame of the camera is actually representative of the input topic's frame ID.
Should we always re-localize a robot when initial_pose messages arrive
Yes; if something arrives on that topic, it came from the user and should be followed.
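The agreed behavior can be sketched like this (hypothetical class and method names, not openvslam_ros code): an incoming initial_pose is treated as a user request and is always applied, regardless of tracking state.

```python
class Localizer:
    """Toy localizer illustrating the initial_pose policy (hypothetical)."""

    def __init__(self):
        self.pose = None
        self.tracking_lost = False

    def on_initial_pose(self, pose):
        # Always re-localize on a user-provided pose, not only when
        # tracking is lost: the user's hint overrides the current estimate.
        self.pose = pose
        self.tracking_lost = False  # a fresh hint restarts tracking

loc = Localizer()
loc.on_initial_pose((1.0, 2.0, 0.0))  # applied even though tracking was fine
```

This mirrors AMCL's handling of /initialpose, where a message on the topic unconditionally resets the filter around the given pose.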
Thanks for noting! Updated https://github.com/OpenVSLAM-Community/openvslam_ros/issues/2#issuecomment-800310426 with items from above.
How fast is the pose estimated in OpenVSLAM (e.g. how often do we update the pose, and is it worth updating the map->odom transform at that rate)?
This topic came up in the Nav2 WG meeting. Traditionally, for lidar SLAM, we update slowly, around 1 Hz or so, so that we have enough new features for a real lock on the changes, and we rely on odometry for local positioning in between. But VSLAM can update much, much faster, which raises the question above.
Hi,
I notice only monocular is supported in ROS2 (while ROS1 has all 3 camera types). This is something my colleague and I can help commit to implementing as part of the Nav2 / ROS2 integration work. We'll likely have other features / things we want to expose (like adding a feature-map publishing topic, among others), but I figured this is a good place to get the ball rolling.