Dear Participants,
We regret to inform you that due to unforeseen paperwork delays, we are unable to annotate and prepare the competition data on time. As a result, the IEEE ITSS Student Competition on Pedestrian Behavior Prediction will be postponed.
We understand the inconvenience this may cause and sincerely apologize for it. We expect the data to be ready in another 1-2 months, and the competition is likely to resume in September when the fall semester begins. However, at this moment, we are unable to provide an exact timeline.
We greatly appreciate your interest and enthusiasm for the IEEE ITSS Student Competition on Pedestrian Behavior Prediction. We ask for your patience and understanding as we work through these delays. Please keep an eye out for further updates and additional notices.
Thank you for your continued support.
Pedestrian behavior prediction is one of the most critical challenges for fully automated driving in urban settings. It requires autonomous vehicles to interact safely and efficiently with pedestrians in diverse and dynamic environments. Accurate and robust pedestrian behavior prediction is crucial to ensure the safety of both pedestrians and autonomous vehicles.
This competition focuses on Short-Term Pedestrian Trajectory Prediction (ST-PTP) and Long-Term Pedestrian Trajectory Prediction (LT-PTP). PTP forecasts a pedestrian's future trajectory from a bird's-eye view, using observed data from six surrounding cameras and lidar. The Short-Term Prediction (ST) targets a 3-second future path, while the Long-Term Prediction (LT) extends to 7 seconds.
We invite competitors from all around the world. Each team's leader must be a current undergraduate or graduate student, and each team may enter only one track.
Winning teams are expected to present their results at the IEEE ITSC 2024 conference.
The competition's start date has been postponed by one month due to data preparation delays. Demo data with labels has now been released. We encourage interested teams to begin developing their algorithms and to pretrain their models on public benchmark datasets.
Data Demo: June 15th
Competition Begins: July 15th
Submission Deadline: September 5th
Our dataset consists of a total of 500 scenarios, split into training, validation, and test sets with a ratio of 70%, 10%, and 20% (350, 50, and 100 scenarios, respectively). Each scenario is at least 20 seconds long. The data provided includes:
Lidar Frame: Captured at 10 FPS, with annotations provided at 1 FPS.
Surrounding Cameras: Six camera images captured at 10 FPS.
Extrinsic and Intrinsic Matrices: Provided for camera and lidar calibration.
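The calibration matrices are typically used to project lidar points into the camera images. The following is a minimal pinhole-projection sketch; the function name, the 4x4 lidar-to-camera extrinsic, and the 3x3 intrinsic layout are our own assumptions, and the released calibration format may differ.

```python
def project_point(point_lidar, extrinsic, intrinsic):
    """Project a 3-D lidar point into pixel coordinates (u, v).

    extrinsic: 4x4 lidar-to-camera transform (row-major nested lists).
    intrinsic: 3x3 camera matrix K (row-major nested lists).
    """
    x, y, z = point_lidar
    # Transform the homogeneous point into the camera frame.
    p = [x, y, z, 1.0]
    cam = [sum(extrinsic[r][c] * p[c] for c in range(4)) for r in range(3)]
    # Apply K, then divide by depth (standard pinhole projection).
    u = (intrinsic[0][0] * cam[0] + intrinsic[0][1] * cam[1]
         + intrinsic[0][2] * cam[2]) / cam[2]
    v = (intrinsic[1][1] * cam[1] + intrinsic[1][2] * cam[2]) / cam[2]
    return u, v
```

Points behind the camera (non-positive depth) would need to be filtered out before projection in practice.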
The annotations on the lidar frame include the following objects:
Training and Validation Data will be released by July 15th when the competition starts.
To ensure the accuracy and robustness of the pedestrian behavior prediction models, the following evaluation metrics will be used. All metrics are averaged over the test samples.
Average Displacement Error (ADE)
ADE = (1 / K) * Σ[ sqrt((x_k - x̂_k)² + (y_k - ŷ_k)²) ]
where K is the total number of annotated key points, (x_k, y_k) is the ground truth position at key point k, and (x̂_k, ŷ_k) is the predicted position at key point k.
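As a concrete reading of the formula, here is a minimal pure-Python sketch; the function name and the list-of-(x, y)-tuples trajectory format are our own conventions, not the competition's submission format.

```python
import math

def ade(gt, pred):
    """Average Displacement Error: mean Euclidean distance
    between ground-truth and predicted positions over all key points."""
    assert len(gt) == len(pred) and len(gt) > 0
    return sum(math.hypot(x - xh, y - yh)
               for (x, y), (xh, yh) in zip(gt, pred)) / len(gt)

# Example: errors of 0 and 1 meter average to 0.5.
# ade([(0, 0), (1, 0)], [(0, 0), (1, 1)]) -> 0.5
```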
Final Displacement Error (FDE)
FDE = sqrt((x_K - x̂_K)² + (y_K - ŷ_K)²)
where K is the final annotated key point, (x_K, y_K) is the ground truth final position, and (x̂_K, ŷ_K) is the predicted final position.
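In the same sketch style as above (trajectories as lists of (x, y) tuples, a convention of ours), FDE reduces to the distance at the last key point:

```python
import math

def fde(gt, pred):
    """Final Displacement Error: Euclidean distance between the
    ground-truth and predicted positions at the final key point."""
    (x, y), (xh, yh) = gt[-1], pred[-1]
    return math.hypot(x - xh, y - yh)

# Example: final points (3, 4) vs (0, 0) give a 3-4-5 triangle.
# fde([(0, 0), (3, 4)], [(0, 0), (0, 0)]) -> 5.0
```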
Miss Rate (MR)
MR = (1 / N) * Σ[ 1(sqrt((x_K^i - x̂_K^i)² + (y_K^i - ŷ_K^i)²) > δ) ]
where N is the total number of predicted trajectories, 1(·) is the indicator function, and δ is the distance threshold.
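A minimal sketch of the miss rate follows; the default threshold value is a placeholder of ours, since the document does not specify δ.

```python
import math

def miss_rate(gt_trajs, pred_trajs, delta=2.0):
    """Miss Rate: fraction of trajectories whose final-point
    displacement error exceeds the threshold delta."""
    assert len(gt_trajs) == len(pred_trajs) and len(gt_trajs) > 0
    misses = sum(
        1 for gt, pred in zip(gt_trajs, pred_trajs)
        if math.hypot(gt[-1][0] - pred[-1][0],
                      gt[-1][1] - pred[-1][1]) > delta
    )
    return misses / len(gt_trajs)
```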
Collision Rate (CR)
CR = (1 / N) * Σ[ 1(collision(i)) ]
where collision(i) indicates whether the i-th predicted trajectory collides with any obstacle.
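The document does not define how collision(i) is computed, so the sketch below substitutes a hypothetical circular-obstacle check (point-to-center distance below a radius); the actual evaluation may use a different geometric test.

```python
import math

def collides_with_circles(traj, obstacles, radius=0.5):
    """Hypothetical collision test: True if any trajectory point
    comes within `radius` of any obstacle center."""
    return any(math.hypot(x - ox, y - oy) < radius
               for (x, y) in traj for (ox, oy) in obstacles)

def collision_rate(pred_trajs, obstacles, radius=0.5):
    """Collision Rate: fraction of predicted trajectories
    flagged as colliding with at least one obstacle."""
    assert len(pred_trajs) > 0
    return sum(1 for t in pred_trajs
               if collides_with_circles(t, obstacles, radius)) / len(pred_trajs)
```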
Submission Format
Data Splitting
Prediction Horizon
Frame Rate Adjustment
Nobuyuki Ozaki, Nagoya University
Lingxi Li, Purdue University
Jing Chen, Rice University
Yaobin Chen, Purdue University
Zhengming Ding, Tulane University
Xin Hu, Tulane University
Renran Tian, North Carolina State University
Shaozhi Wang, North Carolina State University
Zhengming Zhang, Purdue University