HKUST-Aerial-Robotics / Fast-Planner

A Robust and Efficient Trajectory Planner for Quadrotors
GNU General Public License v3.0
2.4k stars 665 forks

Few questions regarding the kino_replan demo #52

Closed FaboNo closed 4 years ago

FaboNo commented 4 years ago

First of all, thank you for sharing this great piece of work!

I am testing the kino dynamic part and I have few questions for you:

In kino_replan.launch you mention that we need to choose between depth image or point cloud, but in sdf_map.cpp the /sdf_map/cloud topic is subscribed to anyway, so commenting out `<arg name="cloud_topic" value="/pcl_render_node/cloud"/>` in the kino_replan.launch file leads to an error at start:

(screenshot attached: Capture du 2020-09-04 10-39-53)

Also, sometimes the drone flies through obstacles (screenshot attached: Capture du 2020-09-03 17-50-38).

Do you know how I can handle these issues (or is it because my computer is too slow)? Also, how can I make the planner provide a new trajectory that stays within a known small range of the current drone pose? fsm/thresh_replan is a parameter that tells when to replan, but it has no effect on the trajectory tracking.

ZbyLGsc commented 4 years ago

@FaboNo

  1. If you use the CPU simulator and comment out cloud_topic, then you cannot receive the cloud generated by pcl_render_node. Why comment it out?

  2. What is the replanning time printed by the planner? If it takes much more than a few milliseconds on your computer, the mentioned issue will occur. The colored obstacles are actually inflated to compensate for the drone's size; the true obstacles are shown in transparent grey. Therefore, as long as the drone does not pass through the grey obstacles it is safe. If you want to inflate more or increase the clearance, you may adjust the parameters in the planner.

  3. Yes, in a future release.

  4. You may try the interactive marker in RVIZ and subscribe to the marker to select targets with more flexibility.
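For intuition, the obstacle inflation mentioned in point 2 can be sketched as follows. This is an illustrative 2-D toy on a flattened voxel grid, not Fast-Planner's actual implementation; all names here are made up:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Mark every voxel within `radius` meters of an occupied voxel as occupied.
// occ is a flattened nx-by-ny grid of 0/1 values.
std::vector<int> inflateObstacles(const std::vector<int>& occ, int nx, int ny,
                                  double resolution, double radius) {
  const int r = static_cast<int>(std::ceil(radius / resolution));  // radius in voxels
  std::vector<int> out(occ.size(), 0);
  for (int x = 0; x < nx; ++x)
    for (int y = 0; y < ny; ++y) {
      if (!occ[x * ny + y]) continue;
      // Stamp a (2r+1) x (2r+1) neighborhood around each occupied voxel.
      for (int dx = -r; dx <= r; ++dx)
        for (int dy = -r; dy <= r; ++dy) {
          int ix = x + dx, iy = y + dy;
          if (ix < 0 || ix >= nx || iy < 0 || iy >= ny) continue;
          out[ix * ny + iy] = 1;
        }
    }
  return out;
}
```

Increasing `radius` here corresponds to increasing the inflation/clearance parameters in the planner config.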

Feel free to comment if you have other questions, and could you please star this repo? Thanks :)

FaboNo commented 4 years ago

@ZbyLGsc thank you for your answers. Actually I already starred it when you published the first version (dyn_planner) :)

After checking the CPU load, I saw that the simulator took 100% of one core (oops), and that the rate/simulator parameter was set to 1000 in quadrotor_simulator_so3.cpp; by the way, this parameter is not set in the file simulator.xml. I set it to 100 and now it works very well.
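For reference, exposing the rate in simulator.xml might look like the fragment below. This is a sketch only; the enclosing node definition in your simulator.xml may differ, so check it against the parameter name read in quadrotor_simulator_so3.cpp:

```xml
<!-- Hypothetical addition to simulator.xml: expose the simulator rate
     (hard-coded default of 1000 in quadrotor_simulator_so3.cpp)
     as a configurable parameter. -->
<param name="rate/simulator" value="100.0"/>
```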

This leads me to a practical question: when you implemented the algorithm on a real quad, how did you fine-tune the different timers (since all callbacks are triggered by timers) in order to follow the trajectory smoothly? Indeed, you have to take into account the DJI FCU, the SO3 controller (which I guess is not the one implemented in the simulation) and the dynamics of the real quad. As I am planning to test it on a real platform, I would really appreciate your "in the field" feedback :)

Although it is not optimized, plan_end stores data of the entire explored world, am I right? Also, why didn't you use FIESTA?

Finally, my last question: as you use the D435i, I guess you used the factory values for the intrinsic parameters?

Cheers

ZbyLGsc commented 4 years ago

The answer is that when going to field experiments, we never need to tune the timers. All our code can be moved to an onboard computer directly. The only things we need to take care of are the topics of odom, depth, pose, and the like. We use an Intel i7 computer, which suffices to run all the modules.

I think you are talking about plan_env. Actually, the map fusion part is exactly the same as FIESTA (raycasting), while we implement a different algorithm to update the ESDF. The reason is that we found FIESTA is capable of updating the ESDF of a large space incrementally, but when it comes to a small local region, it is slower than the algorithm in SDFMap. Since Fast-Planner focuses more on local planning, faster local ESDF generation is desired, so we use the latter method.
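For readers curious about the per-axis step behind this kind of batch local ESDF update, the standard tool is the O(n) lower-envelope distance transform of Felzenszwalb and Huttenlocher. The sketch below is illustrative, not the repo's exact `fillESDF` code, and uses a large finite value in place of infinity for occupied/free initialization:

```cpp
#include <cassert>
#include <limits>
#include <vector>

// f[i] = 0 for occupied cells, a large value (e.g. 1e10) for free cells.
// Returns the squared distance (in cells) to the nearest occupied cell.
std::vector<double> distanceTransform1D(const std::vector<double>& f) {
  const int n = static_cast<int>(f.size());
  std::vector<double> d(n), z(n + 1);
  std::vector<int> v(n);
  int k = 0;
  v[0] = 0;
  z[0] = -std::numeric_limits<double>::infinity();
  z[1] = std::numeric_limits<double>::infinity();
  for (int q = 1; q < n; ++q) {  // build the lower envelope of parabolas
    double s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2.0 * (q - v[k]));
    while (s <= z[k]) {          // pop parabolas dominated by the new one
      --k;
      s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2.0 * (q - v[k]));
    }
    ++k;
    v[k] = q;
    z[k] = s;
    z[k + 1] = std::numeric_limits<double>::infinity();
  }
  k = 0;
  for (int q = 0; q < n; ++q) {  // read distances off the envelope
    while (z[k + 1] < q) ++k;
    d[q] = (q - v[k]) * (q - v[k]) + f[v[k]];
  }
  return d;
}
```

Running this pass along each axis of the local occupancy buffer in turn yields the full 3-D squared-distance field; taking square roots and multiplying by the voxel resolution gives distances in meters. Because cost is linear in the number of local voxels, recomputing a small region every cycle is cheap.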

For the D435 camera, we use a tool in VINS_FUSION to calibrate it.

FaboNo commented 4 years ago

@ZbyLGsc thank you again for your answers.

To clarify what plan_env does: each time you receive new sensor data, the map is reset and a local ESDF map is created, am I right? Therefore you are not keeping a global map in the simulation; I guess that is because you want to keep the ESDF calculation fast enough, and also because Fast-Planner is a local planner working in a receding horizon.

Now, is it possible to create a global map with plan_env, i.e. not reset the buffers? For instance, you explore an environment and then want the drone to come back to its start position, computing a path based only on what is already known of the environment. In that case, would I need another "global path planner" to provide intermediate waypoints to the fast planner?

I starred the repo:)

ZbyLGsc commented 4 years ago

Actually, we have allocated memory for a large region, but in each update loop we only maintain information (occupancy and ESDF) in a small local region. Info outside this local region is cleared. The clearing of the map is done in sdf_map.cpp, lines 652-722. If you do want to maintain a global map, simply comment out these lines and all data will be kept.
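As an alternative to commenting the lines out, the clearing step could be guarded by a flag read from a parameter. The toy sketch below is 1-D for brevity, and all names (`clearOutsideLocalBox`, `keep_global_map`) are made up, not the repo's actual code:

```cpp
#include <cassert>
#include <vector>

// Reset the map buffer outside the local region [lo, hi] unless the user
// asked to keep the global map (hypothetical keep_global_map parameter).
void clearOutsideLocalBox(std::vector<int>& occ, int lo, int hi,
                          bool keep_global_map) {
  if (keep_global_map) return;  // keep all fused data for global planning
  for (int i = 0; i < static_cast<int>(occ.size()); ++i)
    if (i < lo || i > hi) occ[i] = 0;  // clear voxels outside local region
}
```

This keeps the default receding-horizon behavior while letting a return-to-start use case retain the whole explored map.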

If you use kino_replan, no global planner is needed (except in very complex, maze-like environments). If you use topo_replan, you may need a global planner to provide a global reference trajectory.

FaboNo commented 4 years ago

@ZbyLGsc thanks again for your kindness; here is my last question (hopefully). In SDFMap::projectDepthImage() there is a use_depth_filter parameter set to true. Can you tell me more about its utility, besides filtering values below and above a given threshold? Is this the one you are using in real experiments?

ZbyLGsc commented 4 years ago

The filter only does one thing more than direct projection: it throws away very small depth values and truncates huge depths. It is a naive (yet effective) strategy to deal with noisy depth measurements in real-world experiments.
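A minimal sketch of such min/max truncation, assuming D435-style 16-bit depth in millimeters; the function name and thresholds are illustrative, not the actual SDFMap parameter values:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Discard implausibly small readings (including invalid zeros) and clamp
// far readings to a maximum range, returning depths in meters.
std::vector<double> filterDepth(const std::vector<uint16_t>& raw_mm,
                                double min_m = 0.2, double max_m = 5.0) {
  std::vector<double> out;
  out.reserve(raw_mm.size());
  for (uint16_t d : raw_mm) {
    double m = d * 0.001;                  // millimeters -> meters
    if (m < min_m) continue;               // too close / invalid -> drop
    out.push_back(m > max_m ? max_m : m);  // truncate huge depth
  }
  return out;
}
```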

FaboNo commented 4 years ago

@ZbyLGsc Yes, I agree with you that it is an effective strategy; it is just these few lines that I do not understand. It looks like you are downsampling the image in order to extract only a fraction of the depth points of the whole image?

```cpp
for (int v = mp_.depth_filter_margin_; v < rows - mp_.depth_filter_margin_; v += mp_.skip_pixel_) {
  row_ptr = md_.depth_image_.ptr<uint16_t>(v) + mp_.depth_filter_margin_;
  for (int u = mp_.depth_filter_margin_; u < cols - mp_.depth_filter_margin_; u += mp_.skip_pixel_) {
```
ZbyLGsc commented 4 years ago

Yes, the full image leads to too many points, which burdens the raycasting process. Since the map is discretized into voxels of 0.1 m size, you would find that many projected points actually lie inside the same voxels. Therefore it is reasonable to downsample the image. It speeds up the fusion greatly, with only a minor loss of information.
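A quick way to see this: map positions to voxel indices at 0.1 m resolution and note that nearby projected points collapse to the same index, so skipping pixels discards mostly redundant points. A toy 1-D sketch (the function name is made up):

```cpp
#include <cassert>
#include <cmath>

// Map a 1-D position in meters to a voxel index at the given resolution.
int posToVoxel(double pos, double resolution = 0.1) {
  return static_cast<int>(std::floor(pos / resolution));
}
```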

FaboNo commented 4 years ago

@ZbyLGsc Sorry, I forgot to close the thread. Thanks again for your explanations!