georghess / neurad-studio

[CVPR2024] NeuRAD: Neural Rendering for Autonomous Driving
https://research.zenseact.com/publications/neurad/
Apache License 2.0

Question about front_cam_extrinsics #19

Closed blackmrb closed 6 months ago

blackmrb commented 6 months ago

In my understanding, front_cam_extrinsics is the pose of the camera relative to the lidar (i.e. cam2lidar), so l2front_cam should be its inverse: np.linalg.inv(_pandaset_pose_to_matrix(front_cam_extrinsics)). Is my understanding correct?

The code is here.

def _get_lidars(self) -> Tuple[Lidars, List[Path]]:
    """Returns lidar info and loaded point clouds."""
    poses = []
    times = []
    idxs = []
    lidar_filenames = []
    for i in range(PANDASET_SEQ_LEN):
        # the pose information in self.sequence.lidar.poses is not correct, so we compute it from the camera pose and extrinsics
        # the lidar scans are synced such that the middle of a scan is at the same time as the front camera image
        front_cam = self.sequence.camera["front_camera"]
        front_cam2w = _pandaset_pose_to_matrix(front_cam.poses[i])
        front_cam_extrinsics = self.extrinsics["front_camera"]
        front_cam_extrinsics["position"] = front_cam_extrinsics["extrinsic"]["transform"]["translation"]
        front_cam_extrinsics["heading"] = front_cam_extrinsics["extrinsic"]["transform"]["rotation"]

        l2front_cam = _pandaset_pose_to_matrix(front_cam_extrinsics)  # does the extrinsic mean front_cam2l?

        # my proposed fix:
        # l2front_cam = np.linalg.inv(_pandaset_pose_to_matrix(front_cam_extrinsics))

        l2w = torch.from_numpy(front_cam2w @ l2front_cam)
georghess commented 6 months ago

The extrinsics in the yaml-file, where we load the relative poses between sensors, are defined as lidar2cam (the lidar pose expressed in each camera's reference frame). So the code in our repo is correct, but the naming is perhaps a bit confusing.
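A minimal sketch (not from the repo, with hypothetical poses) illustrating this convention: if the yaml extrinsics E are lidar2cam, i.e. E maps points from the lidar frame into the camera frame, then lidar2world is simply cam2world @ E, with no inversion needed, which is what `_get_lidars` computes.

```python
import numpy as np


def make_pose(yaw: float, t) -> np.ndarray:
    """Homogeneous 4x4 pose from a yaw angle and a translation (illustrative)."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    T[:3, 3] = t
    return T


# Hypothetical poses: camera in the world frame, and lidar expressed
# in the camera frame (this plays the role of the yaml extrinsics).
cam2world = make_pose(0.3, [10.0, 5.0, 1.5])
lidar2cam = make_pose(0.0, [0.0, 0.0, -0.5])

# Composition as in _get_lidars: lidar -> cam -> world, no inverse.
lidar2world = cam2world @ lidar2cam

# Sanity check: a point at the lidar origin lands at the same world
# position whether we chain the frames or use lidar2world directly.
p_lidar = np.array([0.0, 0.0, 0.0, 1.0])
assert np.allclose(lidar2world @ p_lidar, cam2world @ (lidar2cam @ p_lidar))
```

If the extrinsics were instead cam2lidar (as the issue assumed), the inverse would be required before composing, which is exactly the confusion the naming invites.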