Thanks for your novel work; we are using this model as our baseline. However, we evaluated the goal sampling network directly using the goals stored in goals_Ynet.pickle and found that they already reach an FDE of 10.6847. You can reproduce this result with the following code:
with open("data/SDD/test.pickle", 'rb' ) as f:
test_data = pickle.load(f)
test_data = np.concatenate(test_data[0], axis=0)
with open("data/SDD/goals_Ynet.pickle", "rb") as f:
goals_ynet = pickle.load(f)
goals_ynet = np.concatenate(goals_ynet[0], axis=0)
fde = np.linalg.norm(goals_ynet-test_data[:, -1], axis=-1).mean(-1)
print(fde)
# 10.684658352813022
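As a side note on comparability: if I recall the standard SDD protocol correctly, YNet's 11.85 FDE is a best-of-20 number (minimum over sampled goals per trajectory). If a goals file stored K samples per trajectory rather than a single goal, the comparable metric would take a min over the sample axis first. A minimal sketch of that computation, where best_of_k_fde and its sampled_goals argument are hypothetical and not part of the released pickles:

def best_of_k_fde(sampled_goals, final_gt):
    # sampled_goals: (N, K, 2) goal samples; final_gt: (N, 2) ground-truth endpoints.
    dists = np.linalg.norm(sampled_goals - final_gt[:, None], axis=-1)  # (N, K)
    # Keep the best sample per trajectory, then average over trajectories.
    return dists.min(axis=-1).mean()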
Comparing with YNet's reported FDE of 11.85, I suspect the major difference comes from the inconsistent data preprocessing mentioned in https://github.com/realcrane/Human-Trajectory-Prediction-via-Neural-Social-Physics/issues/13#issue-1605149076. In addition, better goals lead to better waypoints, which in turn improves the ADE scores. I therefore think the SDD results in Table 1 are unfair to YNet.
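To make the preprocessing concern concrete: pixel-space FDE scales linearly with any coordinate rescaling applied during preprocessing, since ||s*a - s*b|| = s*||a - b||, so two pipelines that resize scenes differently report numbers that are not directly comparable. A minimal sketch continuing from the code above, where the 0.8 factor is purely hypothetical:

scale = 0.8  # hypothetical resize factor; any positive factor behaves the same
fde_scaled = np.linalg.norm(scale * goals_ynet - scale * test_data[:, -1], axis=-1).mean()
print(fde_scaled / fde)  # ~0.8: the reported FDE shrinks by exactly the resize factor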
Could you double-check this?