NVIDIA-AI-IOT / Lidar_AI_Solution

A project demonstrating lidar-related AI solutions, including three GPU-accelerated lidar/camera DL networks (PointPillars, CenterPoint, BEVFusion) and the related libs (cuPCL, 3D SparseConvolution, YUV2RGB, cuOSD).

BEVFusion errors with different lidar configuration #112

Closed: alre5639 closed this issue 2 months ago

alre5639 commented 1 year ago

After configuring the model with all my sensor intrinsics/extrinsics (6 cameras, 1 lidar) I am still getting very poor results. Looking at the visualization .jpg I noticed that the framework is generating an image in which my vehicle is rotated 45 degrees clockwise. This is because our lidar is mounted 45 degrees clockwise (about the vertical axis) relative to the nuScenes lidar. I assumed that, since all the features are mapped into the BEV feature space, the model would account for a lidar mounted at a different angle, but given the poor results I am looking for something to blame. I have attached the middle part of the CUDA-BEVFusion .jpg for reference. I would appreciate ideas for troubleshooting.

(attached image: warped_lida)

alre5639 commented 1 year ago

After transforming the lidar2ego and the point cloud into the same configuration as the nuScenes setup, the results are much more tractable; there are even some predictions that seem reasonable given that my intrinsics/extrinsics are off. However, there are no detections occurring in front of the vehicle. There are two cars clearly parked on the right of the scene that are not detected, even though they are in view of the camera and the lidar.
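
For reference, here is a minimal sketch of that kind of alignment, assuming the lidar is yaw-rotated 45 degrees about the vertical (z) axis relative to the nuScenes lidar; points (N x 4) and lidar2ego (4 x 4) are placeholder names for already-loaded data, and the sign of the angle may need flipping depending on convention:

import numpy as np

def align_to_nuscenes(points, lidar2ego, yaw_deg=45.0):
    # Sketch only: re-express a lidar mounted yaw_deg off about the vertical (z)
    # axis in a virtual frame that is yaw-aligned with the nuScenes lidar.
    yaw = np.deg2rad(yaw_deg)
    c, s = np.cos(yaw), np.sin(yaw)
    R_z = np.array([[c, -s, 0., 0.],
                    [s,  c, 0., 0.],
                    [0., 0., 1., 0.],
                    [0., 0., 0., 1.]])
    points_aligned = points.copy()
    points_aligned[:, :3] = points[:, :3] @ R_z[:3, :3].T
    # The extrinsic absorbs the inverse rotation so ego coordinates are unchanged:
    #   lidar2ego_aligned @ (R_z @ p) == lidar2ego @ p
    lidar2ego_aligned = lidar2ego @ np.linalg.inv(R_z)
    return points_aligned, lidar2ego_aligned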

Is there an easy way to compare the voxelized BEV feature locations of the camera and the lidar, to do some fine tuning of the feature overlap? Additionally, although I am using a different camera configuration, I have not changed the following files. Please let me know if this would lead to poor performance.
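
So far the only check I can think of is to project the lidar points into each camera image with the same lidar2image matrix that is fed to the model and inspect the overlay by eye. A rough sketch (cv2 is assumed available; image, points, and lidar2image are illustrative names for one camera's already-loaded data):

import numpy as np
import cv2

def overlay_lidar(image, points, lidar2image):
    # Project homogeneous lidar points (N x 4) through the 4x4 lidar2image matrix.
    pts_h = np.concatenate([points[:, :3], np.ones((points.shape[0], 1))], axis=1)
    cam = pts_h @ lidar2image.T               # rows are (u*z, v*z, z, 1)
    keep = cam[:, 2] > 0.1                    # keep points in front of the camera
    uv = cam[keep, :2] / cam[keep, 2:3]
    h, w = image.shape[:2]
    for u, v in uv:
        if 0 <= u < w and 0 <= v < h:
            cv2.circle(image, (int(u), int(v)), 1, (0, 255, 0), -1)
    return image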

Further, I generate all extrinsics from camera2ego, lidar2ego, and camera_intrinsics using the following script. Let me know if anything here looks suspect:

import numpy as np

def gen_inverse_homo_transform(transform_mat):
    # Invert a 4x4 rigid homogeneous transform [R t; 0 1]:
    # the inverse is [R^-1, -R^-1 @ t; 0 1].
    a = np.empty_like(transform_mat)
    a[:3,:3] = np.linalg.inv(transform_mat[:3,:3])
    # This is the translation portion of the inverse -- NOTE THE NEGATIVE SIGN.
    a[:3,3] = -np.linalg.inv(transform_mat[:3,:3])@transform_mat[:3,3]
    a[3,:] = np.array([0.,0.,0.,1.])
    return a
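
# Added self-check sketch: composing a transform with its inverse should give
# the identity, up to numerical tolerance.
assert np.allclose(
    gen_inverse_homo_transform(example_Camera2ego[0,0,:,:]) @ example_Camera2ego[0,0,:,:],
    np.eye(4), atol=1e-6)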

# Sanity check: the computed lidar2camera should match the example tensor.
print(gen_inverse_homo_transform(example_Camera2ego[0,0,:,:])@example_Lidar2ego[0,:,:])
print(example_Lidar2camera[0,0,:,:])

###### gen lidar2camera
lidar2camera = np.empty_like(example_Lidar2camera)
for cam in range(6):
    lidar2camera[0,cam,:,:] = gen_inverse_homo_transform(example_Camera2ego[0,cam,:,:])@example_Lidar2ego[0,:,:]

tensor.save(lidar2camera, "better_test_data/lidar2camera.tensor")

###### gen camera2lidar
camera2lidar = np.empty_like(example_Camera2lidar)
for cam in range(6):
    camera2lidar[0,cam,:,:] = gen_inverse_homo_transform(lidar2camera[0,cam,:,:])

tensor.save(camera2lidar, "better_test_data/camera2lidar.tensor")

########## gen lidar2image
lidar2image = np.empty_like(example_lidar2image)
for cam in range(6):
    lidar2image[0,cam,:,:] = intrinsics[0,cam,:,:]@lidar2camera[0,cam,:,:]
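
Note that for the product above to be shape-consistent, the usual 3x3 pinhole intrinsics have to sit inside a 4x4 homogeneous matrix. A minimal sketch of that padding (K is a placeholder for one camera's 3x3 intrinsic matrix):

def to_homogeneous_intrinsics(K):
    # Embed a 3x3 intrinsic matrix in a 4x4 homogeneous matrix so it can be
    # chained with the 4x4 lidar2camera transform.
    K4 = np.eye(4)
    K4[:3,:3] = K
    return K4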

(attached image: lidar_rotated_github)

jorgemn9 commented 2 months ago

Hello! Did you find any solution?

alre5639 commented 2 months ago

@jorgemn9 This was a while ago, but I believe I solved it by re-calibrating my camera and lidar with the MATLAB calibration toolbox. If I remember right, I had issues with overlapping cameras, so I ended up cropping the images at the intersections, but I was able to get the framework running and making reasonable detections. Good luck!