Open · RaviBeagle opened 2 years ago
The following code gives me the correct results:

import numpy as np

def get_transform_lidar_to_camera(lidar_transform, camera_transform):
    # rotation_carla (defined elsewhere in the repo) turns a CARLA rotation into a 3x3 rotation matrix
    R_camera_vehicle = rotation_carla(camera_transform.rotation)
    R_lidar_vehicle = rotation_carla(lidar_transform.rotation)
    # translations, with y negated to go from CARLA's left-handed frame to a right-handed one
    T_camera_vehicle = np.array([[camera_transform.location.x], [-camera_transform.location.y], [camera_transform.location.z]])
    T_lidar_vehicle = np.array([[lidar_transform.location.x], [-lidar_transform.location.y], [lidar_transform.location.z]])
    # 4x4 homogeneous sensor-to-vehicle transforms
    Tr_camera_vehicle = np.vstack((np.hstack((R_camera_vehicle, T_camera_vehicle)), [0, 0, 0, 1]))
    Tr_lidar_vehicle = np.vstack((np.hstack((R_lidar_vehicle, T_lidar_vehicle)), [0, 0, 0, 1]))
    # compose camera->vehicle with vehicle->lidar into a single 4x4 extrinsic
    return np.linalg.inv(Tr_lidar_vehicle).dot(Tr_camera_vehicle)
[[ 1.          0.          0.          0.27000001]
 [ 0.          1.          0.          0.        ]
 [ 0.          0.          1.         -0.08000004]
 [ 0.          0.          0.          1.        ]]
which is the correct transformation when the LIDAR is not at a 180° yaw relative to the camera.
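For reference, here is a minimal, self-contained sketch that reproduces the matrix above with the function as written. Everything in it is illustrative rather than taken from the generator: make_transform is a hypothetical stand-in for carla.Transform, the mounting positions are chosen only so the offsets come out to 0.27 and -0.08, and the placeholder rotation_carla is a plain right-handed ZYX rotation (the repo's helper additionally handles CARLA's left-handed convention, but with zero rotations both reduce to the identity, which is all this check needs).

```python
from types import SimpleNamespace as NS
import numpy as np

# Placeholder for the repo's rotation_carla (assumption): a plain right-handed
# ZYX rotation built from CARLA's degree-based roll/pitch/yaw. The real helper
# may differ in sign conventions, but with zero angles both return the identity.
def rotation_carla(rotation):
    r, p, y = np.radians([rotation.roll, rotation.pitch, rotation.yaw])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    return Rz @ Ry @ Rx

# Stand-in for carla.Transform so the check runs without a CARLA install.
def make_transform(x, y, z, roll=0.0, pitch=0.0, yaw=0.0):
    return NS(location=NS(x=x, y=y, z=z), rotation=NS(roll=roll, pitch=pitch, yaw=yaw))

# Hypothetical mounting positions, picked only to reproduce the printed offsets;
# the real values come from the sensor setup in the generator.
camera_tf = make_transform(x=0.30, y=0.0, z=1.70)
lidar_tf  = make_transform(x=0.03, y=0.0, z=1.78)

# With zero rotations the extrinsic is a pure translation between the mounts:
# x = 0.30 - 0.03 = 0.27 and z = 1.70 - 1.78 = -0.08, matching the matrix above.
# (Uses get_transform_lidar_to_camera from the snippet above.)
print(get_transform_lidar_to_camera(lidar_tf, camera_tf))
```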
Hello! Running this simulation is extremely slow in my case. I have noticed that if I change fps_simu from 1000 to fps_simu=10, the simulation runs faster, but I am not sure how this influences data generation and whether it will affect the final output, which is supposed to be KITTI-like. Do you have any suggestions on how I should approach this? Thanks
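A hedged note on what fps_simu usually controls (the exact wiring inside the generator may differ): in CARLA's synchronous mode the world advances by fixed_delta_seconds per tick, and the ray-cast LiDAR returns only the angular sector swept during that tick. If fps_simu sets that timestep, dropping it from 1000 to 10 means a 10 Hz LiDAR completes an entire revolution within a single tick, so the scan is no longer assembled from many small sectors and the intra-scan motion distortion of a real KITTI-style sweep is not simulated, even though everything runs faster. The sketch below assumes a CARLA server on localhost:2000 and that fps_simu maps directly onto fixed_delta_seconds.

```python
import carla

fps_simu = 1000.0         # simulation ticks per second (the generator's default)
lidar_rotation_hz = 10.0  # a KITTI-like LiDAR spins at 10 Hz

# Assumption: fps_simu is used as the synchronous-mode timestep.
client = carla.Client("localhost", 2000)
world = client.get_world()
settings = world.get_settings()
settings.synchronous_mode = True
settings.fixed_delta_seconds = 1.0 / fps_simu
world.apply_settings(settings)

# CARLA's ray-cast LiDAR emits, each tick, only the sector swept during that
# tick, so a full 360-degree scan is stitched together from this many ticks:
ticks_per_scan = fps_simu / lidar_rotation_hz
print(f"{ticks_per_scan:.0f} ticks per LiDAR revolution")
# 1000 fps -> 100 fine sectors per scan (intra-scan motion is simulated);
#   10 fps -> 1 sector, i.e. the whole scan is produced in a single tick.
```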
Hello @jedeschaud,
The code for the function: