ZJU-FAST-Lab / ego-planner-swarm

An efficient single/multi-agent trajectory planner for multicopters.
GNU General Public License v3.0

UAV unable to replan due to terminal point of trajectory in collision #52

Open Swadine opened 1 year ago

Swadine commented 1 year ago

1. A major problem I've noticed with the planner is that it cannot plan past tall or wide structures in front of it; because of its limited FOV, it gets stuck in an infinite replanning loop. This is what the replan looks like in RViz:

(GIF attached.) If you look closely at the point cloud, there is a tall obstacle in front of the drone; my theory is that the limited FOV makes it impossible to plan through.
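For context, the issue title describes the terminal point of the local trajectory landing inside an obstacle, which makes every replan fail. A common workaround (this is only a sketch of the idea, not code from the repository; `pullGoalOutOfCollision` and the `in_collision` predicate are hypothetical names standing in for a query against the inflated occupancy grid) is to slide an in-collision local goal back along the segment toward the drone until a free sample is found:

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Minimal 3-D vector; the real code uses Eigen::Vector3d.
struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};

// If the terminal point `goal` is in collision, slide it back along the
// segment goal -> start until a collision-free sample is found.
// `in_collision` stands in for an occupancy-grid query (hypothetical).
Vec3 pullGoalOutOfCollision(const Vec3& start, const Vec3& goal,
                            const std::function<bool(const Vec3&)>& in_collision,
                            double step = 0.1) {
    Vec3 dir = start - goal;
    double len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    if (len < 1e-6 || !in_collision(goal)) return goal;  // already free
    Vec3 unit = dir * (1.0 / len);
    for (double s = step; s <= len; s += step) {
        Vec3 cand = goal + unit * s;
        if (!in_collision(cand)) return cand;  // first free point wins
    }
    return start;  // whole segment blocked: fall back to the current position
}
```

This only guarantees a feasible terminal point, not a feasible path; with a wide wall the drone still needs a different local goal to make lateral progress.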

This is what the ego_planner node continuously prints (screenshot from 2023-03-13 attached).

The node also prints this, if it's of any help; what does "base_point and control_point too close" mean? (Screenshot from 2023-03-14 attached.)

I wonder whether we need a Hamiltonian perturbation to solve this problem?

2. I also noticed that although the clearance is set to $dist=0.5$, the drone still refuses to come within roughly 2 meters of obstacles most of the time, yet when an obstacle is slightly outside the FOV the drone will fly straight into it after a sudden yaw. I think this is due to the lack of a map that keeps history.

3. I want to ask why start_pos differs from odom_pos in ego_replan_fsm. In my view, the trajectory should start from the drone's current position; otherwise, if the controller lags slightly, the new trajectory is planned without accounting for where the drone actually is. This raises the question: shouldn't there be closed-loop feedback between the controller and the trajectory planner? Any idea how to merge the two?
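One plausible reason for the start_pos / odom_pos split (my reading, not an authoritative description of the repository's code): the replan start state is sampled from the previously committed trajectory rather than from raw odometry, so consecutive trajectories join with continuous position and velocity even when the controller lags. A minimal one-axis sketch, with hypothetical names:

```cpp
#include <array>
#include <cassert>

// 1-D cubic polynomial segment p(t) = c0 + c1*t + c2*t^2 + c3*t^3.
// The real planner stores one spline per axis; a single axis is shown here.
struct Cubic {
    std::array<double, 4> c;
    double pos(double t) const { return c[0] + t * (c[1] + t * (c[2] + t * c[3])); }
    double vel(double t) const { return c[1] + t * (2.0 * c[2] + 3.0 * t * c[3]); }
};

struct StartState { double pos, vel; };

// Sample the *committed* trajectory a short horizon ahead of "now" instead of
// using raw odometry, so the new trajectory joins the old one smoothly.
StartState replanStartFromTraj(const Cubic& committed, double t_now, double horizon) {
    double t = t_now + horizon;
    return {committed.pos(t), committed.vel(t)};
}
```

Starting from raw odometry instead would turn any tracking error into a jump at the trajectory seam; a common compromise is to reset the start state to odometry only when the tracking error exceeds a threshold.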

@bigsuperZZZX Any help in modifying the codebase is appreciated. Thank you.

Swadine commented 1 year ago

edit:

1. A similar question was raised here. The only solution I can think of right now is to give the drone a randomly sampled point consistent with my goal, and keep feeding it different local goals until it explores its way to the required goal. Do you think this approach is correct?
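The random-local-goal idea described above could be sketched roughly as follows (a sketch only, with hypothetical names; `free` stands in for a free-space query on the local map, and the goal-distance bias is one simple way to keep making progress toward the global goal):

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <limits>
#include <random>

struct Pt { double x, y, z; };

// Sample a random collision-free local goal near the drone, biased toward the
// global goal: draw candidates in a box of half-width `radius` around the
// drone and keep the free one closest to the global goal.
Pt sampleLocalGoal(const Pt& drone, const Pt& global_goal,
                   const std::function<bool(const Pt&)>& free,
                   double radius, std::mt19937& rng, int max_tries = 200) {
    std::uniform_real_distribution<double> u(-radius, radius);
    Pt best = drone;  // fall back to hovering in place if nothing free is found
    double best_d = std::numeric_limits<double>::infinity();
    for (int i = 0; i < max_tries; ++i) {
        Pt cand{drone.x + u(rng), drone.y + u(rng), drone.z};
        if (!free(cand)) continue;
        double dx = cand.x - global_goal.x, dy = cand.y - global_goal.y;
        double d = std::sqrt(dx * dx + dy * dy);  // prefer samples nearer the goal
        if (d < best_d) { best_d = d; best = cand; }
    }
    return best;
}
```

Pure goal-distance bias can still stall in a dead end; mixing in occasional unbiased samples (as in the exploration behavior described above) helps the drone escape local minima.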

3. I fixed this using the discussion here.

bigsuperZZZX commented 1 year ago

Q1: Just try it.

Q2: Try to reduce the speed and the obstacle density.