Closed Alexwxwu closed 2 years ago
Nice work. I have a question about the ego vehicle's future planning. In this work it is used as an input, but at test and evaluation time, how can we obtain the ego vehicle's future planning trajectory if we do not have the ground truth? It seems that in the code the ego vehicle's future planning trajectory is generated from the ground truth. Line 325 in data.py:
```python
def getPlanFuture(self, dsId, planId, refVehId, t):
    # Traj of the reference veh
    refColIndex = np.where(self.Tracks[dsId - 1][refVehId - 1][0, :] == t)[0][0]
    refPos = self.Tracks[dsId - 1][refVehId - 1][1:3, refColIndex].transpose()
    # Traj of the planned veh
    planColIndex = np.where(self.Tracks[dsId - 1][planId - 1][0, :] == t)[0][0]
    stpt = planColIndex
    enpt = planColIndex + self.t_f + 1
    planGroundTrue = self.Tracks[dsId - 1][planId - 1][1:3, stpt:enpt:self.d_s].transpose()
    planFut = planGroundTrue.copy()
    # Fitting the downsampled waypoints as the planned trajectory in testing and evaluation.
    if self.fit_plan_traj:
        wayPoint = np.arange(0, self.t_f + self.d_s, self.d_s)
        wayPoint_to_fit = np.arange(0, self.t_f + 1, self.d_s * self.further_ds_plan)
        planFut_to_fit = planFut[::self.further_ds_plan, ]
        laterParam = fitting_traj_by_qs(wayPoint_to_fit, planFut_to_fit[:, 0])
        longiParam = fitting_traj_by_qs(wayPoint_to_fit, planFut_to_fit[:, 1])
        planFut[:, 0] = quintic_spline(wayPoint, *laterParam)
        planFut[:, 1] = quintic_spline(wayPoint, *longiParam)
    revPlanFut = np.flip(planFut[1:, ] - refPos, axis=0).copy()
    return revPlanFut
```
As stated in the paper, in the quantitative experiments the ego planning is obtained from the ego's ground truth but deliberately degraded (further downsampled and refitted with quintic splines), since we cannot know exactly how the future states of the surrounding agents would change if the ego's plan differed from what actually happened.
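The repo's `fitting_traj_by_qs` and `quintic_spline` helpers are not shown in this thread; as a rough sketch of the degradation step, assuming a plain least-squares quintic fit (the name `fit_quintic`, the sine waypoints, and the horizon numbers below are illustrative, not from the repo):

```python
import numpy as np

def quintic_spline(t, a0, a1, a2, a3, a4, a5):
    # Evaluate the quintic polynomial x(t) = a0 + a1*t + ... + a5*t^5.
    return a0 + a1*t + a2*t**2 + a3*t**3 + a4*t**4 + a5*t**5

def fit_quintic(t_pts, x_pts):
    # Least-squares fit of the six quintic coefficients to sparse waypoints.
    A = np.vander(t_pts, 6, increasing=True)  # columns: t^0 ... t^5
    coeffs, *_ = np.linalg.lstsq(A, x_pts, rcond=None)
    return coeffs

# Degrade a GT future: keep every 5th waypoint, refit, then resample densely,
# so the "plan" fed to the model is a smooth curve rather than the exact GT.
t_dense = np.linspace(0.0, 5.0, 26)   # full prediction horizon
t_sparse = t_dense[::5]               # further-downsampled waypoints
x_sparse = np.sin(t_sparse)           # hypothetical lateral positions
coeffs = fit_quintic(t_sparse, x_sparse)
x_refit = quintic_spline(t_dense, *coeffs)
```

With six sparse waypoints the quintic passes through them exactly; the degradation comes from the intermediate points being smoothed rather than copied from the ground truth.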
In our qualitative experiments (active planning & the user study), by contrast, the sets of diverse plans are produced by a trajectory generator. Beyond [4], other sampling-based trajectory generators or vehicle motion controllers could also be employed to provide diverse ego plans, so as to investigate how the multi-agent prediction result varies with the provided ego plan.
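A minimal sketch of one such sampling-based generator, assuming quintic boundary-value polynomials over a few sampled lateral end offsets (all names and numbers here are illustrative, not taken from the repo or from [4]):

```python
import numpy as np

def quintic_coeffs(x0, v0, a0, xT, vT, aT, T):
    # Solve the quintic boundary-value problem for one axis:
    # position/velocity/acceleration constrained at t=0 and t=T.
    A = np.array([
        [1, 0, 0,    0,      0,       0],
        [0, 1, 0,    0,      0,       0],
        [0, 0, 2,    0,      0,       0],
        [1, T, T**2, T**3,   T**4,    T**5],
        [0, 1, 2*T,  3*T**2, 4*T**3,  5*T**4],
        [0, 0, 2,    6*T,    12*T**2, 20*T**3],
    ])
    b = np.array([x0, v0, a0, xT, vT, aT], dtype=float)
    return np.linalg.solve(A, b)

# Sample diverse ego plans by varying the lateral end offset over horizon T.
T, n_steps = 5.0, 25
t = np.linspace(0.0, T, n_steps + 1)
plans = []
for lat_end in [-3.5, 0.0, 3.5]:  # e.g. left lane change, lane keep, right lane change
    c = quintic_coeffs(0, 0, 0, lat_end, 0, 0, T)
    lateral = sum(c[i] * t**i for i in range(6))
    plans.append(lateral)
```

Each candidate plan could then be fed to the predictor in place of the GT-derived ego future, to see how the surrounding agents' predicted trajectories respond.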