ros-navigation / navigation2

ROS 2 Navigation Framework and System
https://nav2.org/

Question: future planning for NAV2? #3639

Closed · xianglunkai closed this 1 year ago

xianglunkai commented 1 year ago

May I ask what the main development directions for Nav2 will be in the future? From my daily work, I think the following could be interesting and practical directions:

  1. Optimal scheduling and planning for multiple robots

  2. Decision-making under uncertainty and games

  3. Multimodal feature learning

  4. End-to-end reinforcement learning

SteveMacenski commented 1 year ago

Right now, it's on planning and control improvements for real-world applications and newer features common to robotics applications. While the roadmap page is a little out of date, my plan is to update it for the 2023 -> J-turtle plans at the end of the month, now that Iron is released: https://navigation.ros.org/roadmap/roadmap.html

Before talking too much about specifics, I think it would be helpful if you could tell us a little more about yourself. Are you a researcher, a student, or working on a product? That context matters, especially for learning-based technologies, when discussing the quality level and hardware-readiness needed for real-world use.

None of those are on my current scheduled roadmap for me to spend Open Navigation time on, but that doesn't mean I can't help facilitate contributions or work from others in the community in those areas in parallel. Facilitating contributions and projects to adopt into Nav2 is one of my major goals, and I'm happy to review designs, help answer questions, and eventually work with you on quality and process so the work can be placed into Nav2 itself. That said, if it's more research-based code that is not intended to be ready for real-world robots, we have other locations like nav2_auxiliary or your own personal GitHub page which could host Nav2-integrated research code for others to build on, even if it isn't quite ready for primetime yet.

  1. Undoubtedly useful if built generically for a large number of applications.
  2. You'll have to give some more context here.
  3. You'll have to give some more context here.
  4. I have serious doubts about this ever being practical (e.g. replacing all planning, control, and perception in favor of an end-to-end AI). I don't know that I've seen any examples of this working well enough in any domain.

xianglunkai commented 1 year ago

@SteveMacenski Thank you very much for your reply!

I am a product developer who mainly solves problems related to intelligent-driving decision-making and control. I also often have projects involving multi-robot scheduling.

  1. For scheduling large numbers of small robots, I have recently been considering learning-based methods, especially reinforcement learning; I saw that JD did something similar.

  2. In intelligent driving, congestion and lane changing are common. However, due to uncertainty in perception and prediction, the planner either fails to find a solution or oscillates back and forth. I have seen some people consider making decisions in belief space to alleviate such problems (a minimal sketch of that idea follows this list). Also, is Nav2 considering the development of prediction modules?

  3. The sources of robot perception include lasers, cameras, maps, and other semantic information such as voice, video, and text. Fusing this information may help robots make better behavioral decisions.

  4. End-to-end solutions are currently limited to academia and laboratories, but I am considering whether robots in low-speed, enclosed working environments are a better fit for this method. The main question is whether such a solution can achieve data-driven iterative improvement.
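To illustrate point 2, here is a minimal, hypothetical sketch (plain Python, not any Nav2 API) of deciding in belief space: instead of planning against a single most-likely prediction, the cost of each candidate maneuver is averaged over samples drawn from a belief about the other agent's intent. All names and numbers below are illustrative assumptions, not an implementation from this thread.

```python
# Hypothetical illustration only: risk-aware action selection over a sampled belief.
import random

def expected_cost(action, belief_samples, cost_fn):
    """Average the cost of an action over samples drawn from the belief."""
    return sum(cost_fn(action, s) for s in belief_samples) / len(belief_samples)

def choose_action(actions, belief_samples, cost_fn):
    """Pick the action with the lowest expected cost under the belief."""
    return min(actions, key=lambda a: expected_cost(a, belief_samples, cost_fn))

# Toy belief: does a nearby robot yield (True) or not? ~70% chance it yields.
belief = [random.random() < 0.7 for _ in range(100)]

def toy_cost(action, other_yields):
    if action == "merge":
        return 1.0 if other_yields else 10.0  # merging is cheap only if the other robot yields
    return 3.0                                 # waiting is safe but slower

print(choose_action(["merge", "wait"], belief, toy_cost))
```

With these made-up numbers the expected cost of "merge" (about 3.7) exceeds "wait" (3.0), so the robot waits; a planner that trusted only the most-likely prediction ("it will yield") would merge and then oscillate whenever the prediction flips.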

SteveMacenski commented 1 year ago

This might be a better conversation to have via Slack; you can find the invite link in the readme file, and you can introduce yourself in the onboarding channel! For the time being, I don't think there is anything directly actionable enough to keep a ticket open in the issue tracker. Discussions are better in Slack, the working group, Discourse, etc.

Keep in mind this is an AMR / mobile robotics stack, not an autonomous driving / on-road driving stack. That is explicitly a non-feature of this work, so lane keeping / changing and the like are not really on topic for this class of task.

Wrt end-to-end RL, if there aren't any techniques you can point to that are production-ready, or that show proof points of working well enough in lab settings to be applicable in production, I don't think it's a topic worth exploring here. This is not a basic R&D stack; things provided here are expected to work, and to work well and reliably.