HKUST-Aerial-Robotics / EPSILON


Is it appropriate to replan using the state on `executing_traj_` at the current plan cycle's terminal timestamp? #25

Open AmosMichael opened 1 year ago

AmosMichael commented 1 year ago

Hi, I'm reading your great EPSILON paper, but I don't have enough background knowledge to fully understand it. Some of the content confuses me. I hope to get some feedback.

  1. Is it proper to replan using the state on `executing_traj_` at the current plan cycle's terminal timestamp?
    The code in the link is as follows:

      // Query the desired replan state on the executing trajectory at
      // timestamp t (the current plan cycle's terminal timestamp).
      if (executing_traj_->GetState(t, &desired_state) != kSuccess) {
        printf("[SscPlannerServer]Cannot get desired state at %lf.\n", t);
        return;
      }

    It replans using the state on `executing_traj_` at the current plan cycle's terminal timestamp. Wouldn't it be better to use the ego's real state instead of the state on `executing_traj_`?

  2. Is it necessary to run the SSC trajectory planner after EUDM? The decision part (EUDM) already gives the ego's forward trajectories from closed-loop simulations.
    As far as I know, the trajectories given by EUDM already consider the ego's kinematic constraints (using the IDM and MOBIL models to generate longitudinal and lateral behavior, then a pure-pursuit control model to give proper control commands) and surrounding obstacles (via collision checking).
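For reference, the longitudinal IDM model mentioned above can be sketched as follows. This is a generic illustration of the Intelligent Driver Model, not EPSILON's actual implementation; the function name and all parameter values here are assumptions.

    #include <cmath>

    // Generic sketch of the IDM acceleration law used for longitudinal
    // behavior in closed-loop forward simulation. Parameter values are
    // typical textbook choices, not EPSILON's.
    double IdmAcceleration(double v,      // ego speed (m/s)
                           double v_des,  // desired free-flow speed (m/s)
                           double s,      // gap to the leading vehicle (m)
                           double dv) {   // approach rate v_ego - v_lead (m/s)
      const double a_max = 2.0;  // maximum acceleration (m/s^2)
      const double b = 2.0;      // comfortable deceleration (m/s^2)
      const double s0 = 2.0;     // minimum standstill gap (m)
      const double T = 1.5;      // desired time headway (s)
      // Desired dynamic gap grows with speed and closing rate.
      const double s_star = s0 + v * T + v * dv / (2.0 * std::sqrt(a_max * b));
      // Free-road term minus interaction term.
      return a_max * (1.0 - std::pow(v / v_des, 4.0) - (s_star / s) * (s_star / s));
    }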

MasterIzumi commented 9 months ago

@AmosMichael Sorry for the late reply.

  1. It's a good question. Let's denote the starting state of a replan as the "plan_state"; the question is how to determine the plan_state when replanning. In our implementation, we use the current ego state as the initial state in the first plan cycle, and we evaluate a desired plan_state on the last planned trajectory in the following replan cycles. The main reason is that generating the plan_state on the previously planned trajectory brings better consistency across replan cycles, which is partially justified by Bellman's Principle of Optimality:

    An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.

In practice, although the environment and the target state change during each planning cycle, generating the plan_state on the previous "optimal" trajectory still brings better consistency.

Another reason is that this method decouples the planning module from the control module: we do not want the tracking error of the control module to influence the planning part. In practice, we also monitor the displacement between the plan_state and the real ego state (from the positioning system). If the tracking error exceeds a pre-defined threshold, the plan_state is reset to the current ego state.
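Put together, the plan_state selection described above can be sketched roughly like this. The types, function names, and the threshold value are hypothetical stand-ins, not EPSILON's actual code:

    #include <cmath>

    // Minimal stand-in for EPSILON's richer state type (illustration only).
    struct State { double x = 0.0, y = 0.0; /* velocity, heading, ... */ };

    // Sketch of plan_state selection across replan cycles.
    State SelectPlanState(bool has_executing_traj,
                          const State& traj_state_at_t,  // on executing_traj_ at t
                          const State& real_ego_state) {
      const double kMaxTrackingError = 0.5;  // meters (assumed threshold)
      if (!has_executing_traj) {
        // First plan cycle: no previous trajectory, start from the real state.
        return real_ego_state;
      }
      const double dx = traj_state_at_t.x - real_ego_state.x;
      const double dy = traj_state_at_t.y - real_ego_state.y;
      if (std::hypot(dx, dy) > kMaxTrackingError) {
        // Tracking error too large: reset to the current ego state.
        return real_ego_state;
      }
      // Otherwise replan from the previously planned state for consistency.
      return traj_state_at_t;
    }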

  2. SSC generates a trajectory (a piecewise Bézier spline) based on the coarse plan from EUDM. It is much smoother and more fine-grained than the discrete result of EUDM. Personally, I think SSC is not necessary if you have another "heavier" motion controller, such as MPC, since the result of EUDM is usually good enough to serve as a reference trajectory.
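As a side note, evaluating one Bézier segment reduces to repeated linear interpolation (de Casteljau's algorithm). The sketch below is a generic illustration of that idea, not EPSILON's SscPlanner code:

    #include <vector>

    // Generic de Casteljau evaluation of a single Bezier segment from its
    // control points; u is the normalized parameter in [0, 1]. Spline-based
    // trajectories stitch several such segments together per dimension.
    double EvalBezier(std::vector<double> ctrl, double u) {
      for (size_t r = 1; r < ctrl.size(); ++r) {
        for (size_t i = 0; i + r < ctrl.size(); ++i) {
          ctrl[i] = (1.0 - u) * ctrl[i] + u * ctrl[i + 1];  // repeated lerp
        }
      }
      return ctrl.front();
    }

    // Example: a quintic segment has six control points, e.g.
    //   EvalBezier({0.0, 0.5, 1.5, 3.0, 4.5, 5.0}, 0.5);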