lifuguan / XLD_code

XLD: A Cross-Lane Dataset for Benchmarking Novel Driving View Synthesis
https://3d-aigc.github.io/XLD

Merged lidar point cloud is very chaotic #1

Closed fengaba1818 closed 3 weeks ago

fengaba1818 commented 1 month ago

Hi, thank you for your excellent work! I followed the instructions in readme.md and ran merge_lidar.py, but the generated merge_cloud.txt is very chaotic. Could you please guide me on how to obtain the correct point cloud? Here is a picture of the merged cloud of carla_pic_0603_Town02: [screenshot of the merged point cloud] Thank you!

lifuguan commented 1 month ago

Sorry for the late response. We transform the coordinates into the OpenGL convention to fit NeRFStudio, which is why the merged cloud looks chaotic when viewed directly.
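For intuition, that coordinate transfer amounts to reordering axes. The helper below is a sketch of one plausible mapping (swapping y and z, as the commented-out lines in the snippet below suggest, which turns a left-handed z-up frame into a right-handed y-up frame); it is not necessarily the exact `pose_unreal2opengl` used in the repository:

```python
import numpy as np

def swap_yz(points: np.ndarray) -> np.ndarray:
    """Swap the y and z axes of an (N, 3) point array.

    Hypothetical illustration only: exchanging y and z converts a
    left-handed z-up frame (Unreal/CARLA) into a right-handed y-up
    frame (OpenGL-style, as used by NeRFStudio).
    """
    out = points.copy()
    out[:, 1] = points[:, 2]
    out[:, 2] = points[:, 1]
    return out
```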

If you just want to visualize the merged point cloud in MeshLab, you can replace it with the following code:

import os
import json
import numpy as np
# read_ply and transform_point_cloud are the helpers already defined in merge_lidar.py

def generate_point_cloud_text(scene_path):
    start_timestep, end_timestep = 5, 155
    point_cloud_files = []
    poses = []
    # collect every 3rd frame's camera pose and lidar scan
    for t in range(start_timestep, end_timestep, 3):
        with open(os.path.join(scene_path, "train_pic", f"train_camera_extrinsics_{t:06d}.json"), 'r') as file:
            ego_to_world_current = np.array(json.load(file)['transform_matrix'])
            # skip the OpenGL conversion so MeshLab shows the raw CARLA frame
            # lidar_to_world = pose_unreal2opengl(ego_to_world_current)
            lidar_to_world = ego_to_world_current
        lidar_file = os.path.join(scene_path, "train_pic", f"train_lidar_{t:05d}.ply")
        poses.append(lidar_to_world)
        point_cloud_files.append(lidar_file)

    # read each scan and transform it into the world frame
    point_clouds = []
    for file, pose in zip(point_cloud_files, poses):
        points = read_ply(file)
        opengl_point_cloud = points.copy()
        # axis swap disabled for MeshLab visualization
        # opengl_point_cloud[:, 0] = points[:, 0]
        # opengl_point_cloud[:, 1] = points[:, 2]
        # opengl_point_cloud[:, 2] = points[:, 1]
        transformed_points = transform_point_cloud(opengl_point_cloud, pose)
        point_clouds.append(transformed_points)

    # concatenate all scans and write an XYZ text file MeshLab can open
    merged = np.concatenate(point_clouds, axis=0)
    np.savetxt(os.path.join(scene_path, "merge_cloud.txt"), merged)
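`transform_point_cloud` itself is not shown in the snippet; a minimal stand-in, assuming `pose` is a 4x4 lidar-to-world matrix and each scan is an (N, 3) array, could look like this:

```python
import numpy as np

def transform_point_cloud(points: np.ndarray, pose: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to an (N, 3) point array.

    Hypothetical stand-in for the helper used above; it lifts the points
    to homogeneous coordinates, applies the pose, and drops the w row.
    """
    homo = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    return (pose @ homo.T).T[:, :3]
```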

Visualization results are shown below:

[visualization screenshot]