nlbucki / RAPPIDS

Rectangular Pyramid Partitioning using Integrated Depth Sensors (RAPPIDS): A Fast Planner for Multicopter Navigation
GNU General Public License v3.0

Questions about sampling trajectory to controller #1

Open HenryHuYu opened 2 years ago

HenryHuYu commented 2 years ago

Hi! Thank you for your great work. I notice that in the trajectory generation part you set the initial position, velocity, and acceleration to 0:


    int GetNextCandidateTrajectory(
        RapidQuadrocopterTrajectoryGenerator::RapidTrajectoryGenerator& nextTraj) {
      CommonMath::Vec3 posf;
      _planner->DeprojectPixelToPoint(_pixelX(_gen), _pixelY(_gen),
                                      _depth(_gen), posf);
      nextTraj.Reset();
      nextTraj.SetGoalPosition(posf);
      nextTraj.SetGoalVelocity(CommonMath::Vec3(0, 0, 0));
      nextTraj.SetGoalAcceleration(CommonMath::Vec3(0, 0, 0));
      nextTraj.Generate(_time(_gen));
      return 0;
    }

To my understanding, the sampled reference point from the trajectory is sent to the controller in the world frame. So I have a few questions:

  1. Is the initial reference point sampled from the trajectory the same as the drone's current state (p, v, a), or is it (0, 0, 0)?
  2. If the first sampled point of the trajectory is the drone's current state and the drone replans frequently, we notice that the drone diverges. Could you help us figure out how to solve this?

We worked around the issue above by initializing the new trajectory with the currently sampled point, like below:

    RapidTrajectoryGenerator rappid_traj_temp = RapidTrajectoryGenerator(
        Vec3(last_sample_p_in_camera_(0), last_sample_p_in_camera_(1), last_sample_p_in_camera_(2)),
        Vec3(last_sample_v_in_camera_(0), last_sample_v_in_camera_(1), last_sample_v_in_camera_(2)),
        Vec3(last_sample_a_in_camera_(0), last_sample_a_in_camera_(1), last_sample_a_in_camera_(2)),
        gravity_in_camera_);

However, I noticed that when replanning happens, if the first sampled point of the new trajectory is the same as the current sampled point (as in the code above), the closed-form condition in RAPPIDS (i.e. that for a trajectory traj(t) = c5 t^5 + c4 t^4 + c3 t^3 + c2 t^2 + c1 t + c0 intersecting a plane, knowing one intersection point gives a closed form for the other intersection points) is not met. How can we solve this problem?

thank you again!

nlbucki commented 2 years ago

Hi, glad to see you're interested in the project! If I remember correctly, in DepthImagePlanner the trajectories are expressed in the camera-fixed frame at the time of planning, and the positions are written with respect to the position of the camera at the time of planning. That means that each candidate trajectory has an initial position of (0, 0, 0), and an initial velocity/acceleration given by whatever state estimation you are using. If your velocity/acceleration estimates are expressed in the world frame, you will need to rotate them into the camera-fixed frame before calling the trajectory generator. Then, after the trajectory is generated, you will likely need to translate it from the camera-fixed frame back into the world frame.

For example, if at the time of planning (i.e. t = 0) the camera has position camera_position (expressed in the world frame) and orientation world_R_camera (i.e. the rotation from the camera frame to the world frame), then you would want to do something like:

    world_position = world_R_camera * trajectory.GetPosition(t) + camera_position
    world_velocity = world_R_camera * trajectory.GetVelocity(t)
    world_acceleration = world_R_camera * trajectory.GetAcceleration(t)
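
If it's useful, here is a rough C++ sketch of that sampling step. Everything except the RapidTrajectoryGenerator accessors (GetPosition, GetVelocity, GetAcceleration) and CommonMath::Vec3 is just an illustrative name or assumption (in particular the rotation type), so adapt it to whatever math types you use:

    // Illustrative helper (not part of RAPPIDS): sample a candidate trajectory at
    // time t and express the result in the world frame. RotationT stands for
    // whatever rotation type you use, as long as operator* rotates a
    // CommonMath::Vec3 (e.g. a rotation-matrix or quaternion wrapper).
    // Assumes the RAPPIDS headers for RapidTrajectoryGenerator and CommonMath::Vec3
    // are included.
    template <typename RotationT>
    void SampleTrajectoryInWorldFrame(
        RapidQuadrocopterTrajectoryGenerator::RapidTrajectoryGenerator& traj,
        const RotationT& world_R_camera,          // camera -> world rotation at planning time
        const CommonMath::Vec3& camera_position,  // camera position (world frame) at planning time
        double t,
        CommonMath::Vec3& world_position,
        CommonMath::Vec3& world_velocity,
        CommonMath::Vec3& world_acceleration) {
      // Candidate trajectories start at (0, 0, 0) in the camera-fixed frame, so the
      // position needs both the rotation and the translation by the camera position.
      world_position = world_R_camera * traj.GetPosition(t) + camera_position;
      // Velocity and acceleration are free vectors and only need the rotation.
      world_velocity = world_R_camera * traj.GetVelocity(t);
      world_acceleration = world_R_camera * traj.GetAcceleration(t);
    }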

I'm not sure why your drone is diverging from the generated trajectory, as there could be many reasons. Unfortunately I'm also having trouble understanding your last question about the closed-form condition not being met; maybe you're referring to the assumption in the planner that the initial position is (0, 0, 0)?

racheraven commented 2 years ago

I have encountered question 2 as well. I would like to restate this question to make it clearer.

As far as I know, all the trajectories RAPPIDS replans start at the place where the quadrotor currently is (e.g. the odometry from VIO), regardless of where the quadrotor should be (i.e. the sample point from the previously planned trajectory).

For example, suppose the previously planned trajectory wants the quadrotor to be at (x, y, z), in the world frame, at time t0; call this the reference point. Suppose we are using a PID controller to control the quadrotor. Due to disturbances, the quadrotor is actually at (x', y', z'), in the world frame, at time t0. If replanning takes place at t0, the reference point suddenly jumps to (x', y', z'), because RAPPIDS enforces that the newly generated trajectory starts at (x', y', z').

This sudden change of reference point is, I am afraid, unbearable for most controllers, because most controllers depend on the difference between the reference position and the current position (i.e. feedback) to compute the control command. The sudden drop of this feedback to zero causes divergence in the quadrotor's trajectory tracking.

Take PID as an example. If the PID outputs a velocity command, then v_command = Kp * (P_reference - P_current) + v_reference, or something like this. Whenever replanning happens, the feedback term in this equation falls back to zero. If replanning takes place frequently enough, v_command will be approximately equal to v_current (because P_reference = P_current and v_reference = v_current). Since the planned trajectory is continuous, v_command will stay close to v_current until the next replanning begins. Therefore the quadrotor's velocity will always be v_current, or very close to it. In our experiments, this causes the quadrotor to drift further and further in one direction, away from the final goal.
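
To make the failure mode concrete, here is a tiny sketch of that control law (names are just illustrative, and we assume CommonMath::Vec3 supports subtraction and scalar multiplication):

    // Toy velocity-command law, only to illustrate the argument above.
    CommonMath::Vec3 VelocityCommand(const CommonMath::Vec3& p_reference,
                                     const CommonMath::Vec3& p_current,
                                     const CommonMath::Vec3& v_reference,
                                     double Kp) {
      return Kp * (p_reference - p_current) + v_reference;
    }

    // At the instant of replanning, p_reference == p_current and
    // v_reference == v_current, so VelocityCommand(...) == v_current: the feedback
    // term contributes nothing and the vehicle is simply told to keep whatever
    // velocity it already has.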

This problem is not unique to PID controllers. Other controllers, such as geometric controllers, show the same behavior. In fact, I believe any controller with a feedback term cannot endure such a drop in the reference point. How did you overcome this problem? We are eager to hear from you!

Thanks for your time!

nlbucki commented 2 years ago

Ah, I see your problem now. Yes, you are correct that using purely a feedback controller will result in the behavior you described, as you essentially always have zero tracking error. My intuition is that you may not be using (or perhaps are using incorrectly) the feedforward terms given by the generated trajectory. For the trajectories used in this work (i.e. fifth-order polynomials in time), the feedforward control inputs are the total thrust produced by the propellers and the angular velocity of the vehicle. Here we are assuming that the angular velocity of the vehicle can be controlled sufficiently fast such that it can essentially be treated as a control input.

You can get these feedforward terms using GetThrust() and GetOmega() from the RapidTrajectoryGenerator class. Note that GetOmega() returns the angular velocity of the vehicle expressed in the camera-fixed frame at the time of planning, so you probably would want to transform it into the body-fixed frame before using it, e.g. like

    reference_angular_velocity = body_now_R_world * world_R_body_at_time_of_planning * body_R_camera * traj.GetOmega(t, dt)

where body_now_R_world is the rotation from the world frame into the current body-fixed frame (the inverse of the vehicle's current attitude), world_R_body_at_time_of_planning is the attitude of the vehicle w.r.t. the world frame at the time of planning, and body_R_camera is the attitude of the camera relative to the body-fixed frame (always constant).
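
As a rough sketch of that rotation chain (again, the helper and the rotation type are just illustrative; only GetThrust() and GetOmega() are the actual accessors):

    // Illustrative helper (not part of RAPPIDS): feedforward thrust and body-frame
    // angular velocity at time t along the planned trajectory. RotationT is your own
    // rotation type; operator* is assumed to compose rotations and rotate Vec3s.
    template <typename RotationT>
    void GetFeedforwardTerms(
        RapidQuadrocopterTrajectoryGenerator::RapidTrajectoryGenerator& traj,
        const RotationT& body_now_R_world,          // world -> current body frame
        const RotationT& world_R_body_at_planning,  // body frame at planning time -> world
        const RotationT& body_R_camera,             // camera -> body frame (constant)
        double t, double dt,
        double& thrust_ff,
        CommonMath::Vec3& ang_vel_ff) {
      // Feedforward collective thrust straight from the trajectory.
      thrust_ff = traj.GetThrust(t);
      // GetOmega() is expressed in the camera-fixed frame at planning time, so rotate
      // camera -> body (at planning) -> world -> body (now).
      ang_vel_ff = body_now_R_world
                   * (world_R_body_at_planning * (body_R_camera * traj.GetOmega(t, dt)));
    }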

You would then want to incorporate these quantities into some type of tracking controller. As a rough sketch, you might do something like the following (there are probably also different or better architectures you could use; this is just an example):

    acceleration_error = K_p * (traj.GetPosition(t) - current_position) + K_d * (traj.GetVelocity(t) - current_velocity)
    total_thrust_feedback = acceleration_error.Dot(world_R_body_now * Vec3d(0, 0, 1))
    total_thrust_cmd = traj.GetThrust(t) + total_thrust_feedback

Then, you would compute the reference attitude world_R_body_ref from traj.GetAcceleration(t) (which gives you the reference thrust direction) and the desired vehicle yaw (you choose this depending on your application). You can then compute your angular velocity feedback term ang_vel_feedback using whatever method you like (e.g. compute the roll/pitch/yaw error and apply a proportional feedback gain, or do some geometric control-type error).

    ref_ang_vel = body_now_R_world * world_R_body_at_time_of_planning * body_R_camera * traj.GetOmega(t, dt)
    ang_vel_cmd = ref_ang_vel + ang_vel_feedback

Thus, when you replan and the reference state jumps to the current state of the vehicle, the terms total_thrust_feedback and ang_vel_feedback should indeed go to zero because you will suddenly have zero tracking error. However, by applying the feedforward terms traj.GetThrust(t) and ref_ang_vel, the vehicle should track the planned trajectory (assuming reasonably small model errors). This is basically just a model-predictive-controller where the control inputs are total thrust and angular velocity, but with a feedback controller used between MPC replanning steps, if you want to think of it that way.
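
For completeness, here is one way you could collect the sketch above into a single tracking step. Everything other than the RapidTrajectoryGenerator accessors is an assumption (rotation type, gains, attitude-feedback placeholder), so treat it as a starting point rather than a reference implementation:

    // Illustrative tracking step combining the feedforward and feedback terms above.
    // RotationT is your own rotation type; attitude feedback is left as a placeholder.
    template <typename RotationT>
    void TrackingStep(
        RapidQuadrocopterTrajectoryGenerator::RapidTrajectoryGenerator& traj,
        double t, double dt,
        const CommonMath::Vec3& current_position,   // world frame
        const CommonMath::Vec3& current_velocity,   // world frame
        const RotationT& world_R_body_now,          // current body frame -> world
        const RotationT& body_now_R_world,          // world -> current body frame (inverse of the above)
        const RotationT& world_R_body_at_planning,  // body frame at planning time -> world
        const RotationT& world_R_camera,            // camera at planning time -> world
        const RotationT& body_R_camera,             // camera -> body frame (constant)
        const CommonMath::Vec3& camera_position,    // camera position (world frame) at planning time
        double K_p, double K_d,
        double& total_thrust_cmd,
        CommonMath::Vec3& ang_vel_cmd) {
      // Reference position/velocity in the world frame (see the transformation above).
      CommonMath::Vec3 ref_position = world_R_camera * traj.GetPosition(t) + camera_position;
      CommonMath::Vec3 ref_velocity = world_R_camera * traj.GetVelocity(t);

      // Position/velocity feedback projected onto the current thrust axis.
      CommonMath::Vec3 acceleration_error =
          K_p * (ref_position - current_position) + K_d * (ref_velocity - current_velocity);
      double total_thrust_feedback =
          acceleration_error.Dot(world_R_body_now * CommonMath::Vec3(0, 0, 1));
      total_thrust_cmd = traj.GetThrust(t) + total_thrust_feedback;

      // Feedforward angular velocity rotated into the current body frame, plus
      // whatever attitude feedback you compute (zero placeholder here).
      CommonMath::Vec3 ref_ang_vel =
          body_now_R_world * (world_R_body_at_planning * (body_R_camera * traj.GetOmega(t, dt)));
      CommonMath::Vec3 ang_vel_feedback(0, 0, 0);  // e.g. from a roll/pitch/yaw or geometric attitude error
      ang_vel_cmd = ref_ang_vel + ang_vel_feedback;
    }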