uzh-rpg / agile_autonomy

Repository Containing the Code associated with the Paper: "Learning High-Speed Flight in the Wild"
GNU General Public License v3.0

potential & concerns #64

Open coffear opened 2 years ago

coffear commented 2 years ago

This end-to-end motion-planning strategy takes advantage of advances in AI and NPU hardware, and it is pioneering work. However, I have serious doubts about its practical application.

The main points I take from this work are:

  1. The actual inputs to the network are the processed depth image, the corresponding robot state, and the goal direction. Therefore, the network can be trained with synthetic ("fake") depth images.
  2. The range of the depth matters for training: even depth readings far from the robot can affect the network's output.
  3. If the raw network output is the exact training result, why does it need to be fitted again before being sent to the controller? Does this mean the network is not trained well?
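Regarding points 1 and 2, a minimal sketch of what such input preprocessing could look like (the clipping range, function names, and input layout here are my own assumptions, not values from the paper or the repository): far depth readings are clipped to a maximum range and normalized, so distant structure still contributes a saturated value rather than unbounded noise, and the depth, state, and goal direction are packed together as the three network inputs.

```python
import numpy as np

# Assumed clipping range in meters -- a hypothetical value,
# not taken from the paper.
MAX_DEPTH_M = 20.0

def preprocess_depth(depth_m: np.ndarray) -> np.ndarray:
    """Clip an HxW metric depth image at MAX_DEPTH_M and scale to [0, 1]."""
    clipped = np.clip(depth_m, 0.0, MAX_DEPTH_M)
    return clipped / MAX_DEPTH_M

def assemble_input(depth_m, robot_state, goal_dir):
    """Bundle the three inputs the network consumes: normalized depth,
    robot state vector, and goal-direction vector."""
    return {
        "depth": preprocess_depth(np.asarray(depth_m, dtype=np.float32)),
        "state": np.asarray(robot_state, dtype=np.float32),
        "goal": np.asarray(goal_dir, dtype=np.float32),
    }
```

Because only the normalized depth reaches the network, depth rendered in simulation is interchangeable with real sensor depth, which is what makes training on "fake" depth possible.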
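On point 3, one common reading (an assumption on my part, not a statement from the authors) is that the fitting step is not a correction of a badly trained network: the network predicts a discrete set of waypoints, and fitting a polynomial through them yields a smooth, differentiable trajectory from which a tracking controller can sample position and velocity at its own rate. A sketch of such a per-axis least-squares fit, with hypothetical sample times and degree:

```python
import numpy as np

def fit_trajectory(times, waypoints, degree=5):
    """Least-squares polynomial fit of predicted waypoints, one per axis.

    times:     (N,) sample times of the network's waypoint predictions
    waypoints: (N, 3) predicted x/y/z positions
    returns:   list of three np.polynomial.Polynomial objects
    """
    waypoints = np.asarray(waypoints, dtype=np.float64)
    return [
        np.polynomial.Polynomial.fit(times, waypoints[:, axis], degree)
        for axis in range(3)
    ]

# The controller can then query a smooth reference and its derivative:
#   polys = fit_trajectory(t, wp)
#   pos = [p(0.1) for p in polys]          # position at t = 0.1 s
#   vel = [p.deriv()(0.1) for p in polys]  # velocity at t = 0.1 s
```

If the predicted waypoints already lie on a smooth curve, the fit reproduces them; the fit only changes anything when the raw predictions are noisy, which is a filtering role rather than a sign of poor training.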

What I am concerned about is how far this is from practical application. Training is the key to the whole work: the possible combinations of depth and odometry information are effectively infinite, and how well the training data covers these scenarios determines the quality of the model. Obtaining a model that works across most application scenarios would therefore be extremely hard.