SysCV / shift-dev

SHIFT Dataset DevKit - CVPR2022
https://www.vis.xyz/shift
MIT License

How to get extrinsics? #6

Closed · haradatm closed this 1 year ago

haradatm commented 2 years ago

Thank you for the great dataset; it looks well suited for training DROID-SLAM. How can I get the extrinsic parameters or camera poses? I believe they are not included in {det_2d,det_3d,det_insseg_2d}.json.

mattiasegu commented 2 years ago

Hi @haradatm, thanks for your interest in our dataset! You are right that the sensor extrinsics have not been provided yet. We are working on it and will update you in the next few days on the status!

yjsx commented 2 years ago

Hello, thanks for your amazing dataset. However, while exploring the dataset I found that I need the camera poses too. Have you made any progress on this?

What's more, I have tried to use the extrinsic parameters of the cameras in config/sensors.yaml to align the point clouds generated from the depth images of different views, but there seems to be a small error in the translation part. For example, I get the extrinsic translation of "left_45" as [0.145, 0.1, 1.6], not [0.356, -0.356, 1.6]. I don't know why this happens. Maybe you know the reason?
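
For reference, here is roughly how I do the alignment (a simplified sketch; the intrinsics K, the dummy depth map, and the R/t values below are placeholders rather than the actual SHIFT parameters):

import numpy as np

# Placeholder pinhole intrinsics and a dummy depth map (not the real SHIFT values).
K = np.array([[640.0, 0.0, 640.0],
              [0.0, 640.0, 400.0],
              [0.0, 0.0, 1.0]])
depth = np.ones((800, 1280))

def depth_to_points(depth, K):
    # Back-project a depth map of shape (H, W) into 3D points in the camera frame.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    rays = np.linalg.inv(K) @ np.stack([u.ravel(), v.ravel(), np.ones(h * w)], axis=0)
    return rays * depth.ravel()  # shape (3, H*W)

# Per-camera extrinsics: rotation R and translation t in the vehicle frame
# (placeholder values; in practice I read them from config/sensors.yaml).
R = np.eye(3)
t = np.array([0.145, 0.1, 1.6])

# Points from every view should land in one common frame; the misaligned
# overlap between views is where I noticed the translation discrepancy.
points_vehicle = R @ depth_to_points(depth, K) + t[:, None]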

At last, thanks for your dataset again!

GANWANSHUI commented 1 year ago

Hi, may I ask how you obtained the inconsistent extrinsic parameters?

suniique commented 1 year ago

Hey @GANWANSHUI @haradatm @yjsx, thanks for the questions! We have recently fixed the label files and uploaded the LiDAR point cloud data as well as the LiDAR sensor extrinsics. For details on the coordinate system, please see the annotations section of the Get Started guide.

For the problem @yjsx mentioned, could you please recheck the extrinsics with the new label files? Would you mind telling us the exact sequence number if the problem persists?

haradatm commented 1 year ago

Hi @suniique, thanks for the helpful information. Actually, what I want is the camera poses. Are those available as well?

GANWANSHUI commented 1 year ago

Hi @haradatm, I also want to obtain the camera pose relative to the vehicle frame. I tried to convert the rotation angle into a rotation matrix about the Z axis with:

np.array([[math.cos(theta), -math.sin(theta), 0], [math.sin(theta), math.cos(theta), 0], [0, 0, 1]])

But the result seems incorrect. It would be great if the authors could provide the camera poses in rotation matrix format.

[screenshot of the incorrect camera poses]

suniique commented 1 year ago

Hey @GANWANSHUI, thanks for the question! From the screenshot you posted, I suspect that the initial pose of your camera is incorrect. All sensors have an initial pose at the origin, facing the positive x-axis of the world coordinate system.

You could get the rotation matrix via scipy:

import numpy as np
from scipy.spatial.transform import Rotation as R

# Extrinsic rotation stored as XYZ Euler angles, in radians.
rot = np.array(frame.extrinsics.rotation)
rot_matrix = R.from_euler("xyz", rot, degrees=False).as_matrix()
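
If you then want the camera pose as a single 4x4 homogeneous matrix, you can combine the rotation above with the translation (a minimal sketch; I am assuming the translation is stored in frame.extrinsics.location, by analogy with the rotation field):

# Stack rotation and translation into a 4x4 sensor-to-world transform.
pose = np.eye(4)
pose[:3, :3] = rot_matrix
pose[:3, 3] = np.array(frame.extrinsics.location)  # assumed field name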

To make things easier, I have also created a script for camera pose visualization at shift_dev/vis/sensor_pose.py for reference. FYI, here is an expected result from the script (sequence id = dcfd-67f5), where the black dot denotes the starting pose in that sequence.

[visualization of the sensor poses for sequence dcfd-67f5]

@haradatm @yjsx, you can also check this out. Let me know if anything is still unclear! 😄