In general this code-base does not use a unified coordinate system. The coordinate system of different data changes throughout the code, so you will need to track the changes to understand them.
In principle the LiDAR uses the CARLA coordinate system, which is (x-forward, y-right, z-up), yes. However, the LiDAR is rotated by -90°. The reason is the 10 Hz frequency (we rotate it so we get front-back sweeps instead of right-left). So the points coming from the sensor will be in an x-left, y-front, z-up system with respect to the ego vehicle.
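If it helps, here is a minimal sketch (the helper name is mine, not something from the repo) of undoing that rotation for an (N, 3) point array, assuming the points arrive in the x-left, y-front, z-up system described above:

import numpy as np

def lidar_points_to_ego_frame(points):
    # Sketch only: map x-left / y-front / z-up sensor points onto the
    # ego frame x-forward / y-right / z-up.
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # x_ego = y_lidar (front stays front), y_ego = -x_lidar (LiDAR x points left), z unchanged
    return np.stack([y, -x, z], axis=1)

# e.g. ego_points = lidar_points_to_ego_frame(raw_lidar[:, :3])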
You can also go through the closed issues in the repo, I have answered questions about coordinate systems at other places as well.
Another similar question:
import numpy as np

# ego_theta: heading from the IMU compass (radians)
R = np.array([
    [np.cos(np.pi/2 + ego_theta), -np.sin(np.pi/2 + ego_theta)],
    [np.sin(np.pi/2 + ego_theta),  np.cos(np.pi/2 + ego_theta)]
])
Why does ego_theta from the IMU need np.pi/2 added to it?
Thanks a lot for your reply
np.pi/2 represents a 90° rotation. I think this is because the compass uses a system where north points left: north is (0.0, -1.0, 0.0) w.r.t. the default CARLA coordinate system. The 90° rotation turns north to front.
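For what it's worth, a quick numpy check of that claim (treating north as the vector (0.0, -1.0) in the CARLA x-y plane and setting ego_theta = 0, so only the 90° offset is applied):

import numpy as np

angle = np.pi / 2 + 0.0                 # ego_theta = 0, only the 90° offset
R = np.array([
    [np.cos(angle), -np.sin(angle)],
    [np.sin(angle),  np.cos(angle)],
])
north = np.array([0.0, -1.0])           # north w.r.t. the default CARLA frame
print(np.round(R @ north, 6))           # ≈ [1, 0], i.e. north maps to front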
The "target_point" coordinate is relative to the vehicle coordinate system (x-forward, y-right). In the "draw_target_point" function, you convert it to the lidar coordinate system through "target_point[1] += 1.3". Whether the input form of "target_point" is ( y, x)? Why does the y coordinate value of "target_point" need to be reversed before entering the GRU module(target_point[:, 1] *= -1)? The problem of coordinate system conversion has been bothering me. It would be better if there are detailed comments for many steps involving coordinate transformation in the code. Thank you for your reply.
It would be better if this mess didn't exist in the first place. This codebase has grown historically and contains code from different repos. We will address this in future work. I don't think the target_point[:, 1] *= -1 conversion is necessary per se.
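As a rough illustration only (this is not the exact code from draw_target_point; the 1.3 m value is simply the offset quoted above), the vehicle-to-LiDAR shift could be written as:

import numpy as np

def target_point_to_lidar_frame(target_point, sensor_offset=1.3):
    # Sketch: shift a target point given in the vehicle frame into the
    # LiDAR frame by adding the sensor's mounting offset to the second
    # coordinate, mirroring the target_point[1] += 1.3 step discussed above.
    tp = np.asarray(target_point, dtype=float).copy()
    tp[1] += sensor_offset
    return tp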
Thanks a lot for your contribution. I don't really understand the orientation of the vehicle coordinate system. We can confirm that the orientation of the LiDAR coordinate system is (x-forward, y-right, z-up). According to the "get_lidar_to_vehicle_transform()" function you defined in "utils.py", it can be inferred that the orientation of the vehicle coordinate system is (x-right, y-back, z-up), which differs from the usual convention. Could you tell me the orientation of the vehicle coordinate system you defined, and whether the vehicle coordinate system in CARLA is unified? I would be very grateful if you could clear up my doubts.
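One way to check a convention like this is to apply the rotation part of the transform to unit vectors and see where each axis lands. The sketch below uses an identity matrix as a placeholder only; the actual 3x3 block from get_lidar_to_vehicle_transform() is not reproduced here:

import numpy as np

def describe_axes(rotation, src="lidar", dst="vehicle"):
    # Print where each source-frame axis ends up in the target frame.
    for i, name in enumerate(["x", "y", "z"]):
        unit = np.zeros(3)
        unit[i] = 1.0
        print(f"{src} {name}-axis -> {dst} frame {np.round(rotation @ unit, 3)}")

describe_axes(np.eye(3))  # placeholder rotation; substitute the real matrix from utils.py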