I found that all the z-coordinate data of nusc.ego_pose are always 0:
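A minimal sketch of the check I ran (the `dataroot` path and `v1.0-mini` version are placeholders; the devkit loads each table as a list of dicts, so `nusc.ego_pose` can be iterated directly):

```python
def distinct_ego_z(ego_poses):
    """Collect the set of distinct z coordinates from ego_pose records."""
    return {pose['translation'][2] for pose in ego_poses}

if __name__ == '__main__':
    # Requires the nuscenes-devkit and a local copy of the dataset;
    # the dataroot below is a placeholder.
    from nuscenes.nuscenes import NuScenes
    nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes')
    print(distinct_ego_z(nusc.ego_pose))  # only 0.0 appears in my case
```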
Referring to the nuscenes_tutorial.ipynb guide, `ego_pose` contains information about the location (encoded in `translation`) and the orientation (encoded in `rotation`) of the ego vehicle, with respect to the global coordinate system.
If the recorded data were accurate, this would mean the vertical height of the ego does not change by even 0.001 m in the global coordinate system over an entire scene, which is obviously impossible. My guess is that the dataset simply did not record the vehicle's vertical (z-axis) translation during the annotation process.
However, the `translation` annotation for each object is also given in the global coordinate system. To keep the projection (into the ego/sensor frame) correct within a single frame, the object's z value is deliberately shifted away from its true world position, offsetting the error in the recorded ego pose.
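A toy numeric example (the heights are made up, not from the dataset, and rotation is ignored for simplicity) of why shifting the object's stored z by the ego's missing height leaves the ego-frame projection unchanged:

```python
# Hypothetical numbers illustrating the compensation; only z translation
# is considered, rotation is omitted.
true_ego_z = 2.0      # assumed true ego height in the global frame
recorded_ego_z = 0.0  # what ego_pose actually stores

true_obj_z = 1.5                                           # object's true global z
stored_obj_z = true_obj_z - (true_ego_z - recorded_ego_z)  # shifted annotation

# Projecting into the ego frame subtracts the ego translation:
true_relative_z = true_obj_z - true_ego_z           # -0.5
stored_relative_z = stored_obj_z - recorded_ego_z   # -0.5, identical
assert true_relative_z == stored_relative_z
```

So within any single frame the error cancels, but the stored global z values are no longer physically meaningful across frames.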
For example, the following code shows the translation annotations of a stationary parked vehicle (in scene-0034) across different frames. We can see that its z value changes by 1.996 m.
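A sketch of how such annotations can be collected, walking one instance's `sample_annotation` records through their `next` links (the instance token and dataroot below are placeholders):

```python
def z_values_for_instance(get, first_annotation_token):
    """Follow sample_annotation 'next' links, collecting each translation z.

    `get` is a lookup with the devkit's get(table, token) signature.
    """
    zs, token = [], first_annotation_token
    while token:
        ann = get('sample_annotation', token)
        zs.append(ann['translation'][2])
        token = ann['next']  # empty string terminates the chain
    return zs

if __name__ == '__main__':
    # Requires the nuscenes-devkit and a local dataset; token is a placeholder.
    from nuscenes.nuscenes import NuScenes
    nusc = NuScenes(version='v1.0-trainval', dataroot='/data/sets/nuscenes')
    instance = nusc.get('instance', '<instance_token>')
    zs = z_values_for_instance(nusc.get, instance['first_annotation_token'])
    print(max(zs) - min(zs))  # ~1.996 for the parked vehicle in scene-0034
```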
I fully understand that this is a reasonable way to compensate for the missing ego z offset. However, I am doing some work involving ego trajectories, and I need the ego's real vertical (z-axis) displacement within a scene. Is there any way to obtain this value, directly or indirectly?