Closed · shanxiaojun closed this issue 1 year ago
@Quantum-matrix you could try something like this:
from nuscenes.nuscenes import NuScenes

nusc_ = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=False)
sensor = 'CAM_FRONT'
sample = nusc_.sample[100]
# Look up the sample_data record for the front camera...
sd_token = sample['data'][sensor]
sd = nusc_.get('sample_data', sd_token)
# ...and its calibrated_sensor record, which holds the camera's
# extrinsics (translation, rotation) and intrinsics.
cs_token = sd['calibrated_sensor_token']
cs = nusc_.get('calibrated_sensor', cs_token)
print(cs)
The above would give you:
{
"token": "89b9d32d6d084953b8a7a1584e7e21f0",
"sensor_token": "725903f5b62f56118f4094b46a4470d8",
"translation": [
1.72200568478,
0.00475453292289,
1.49491291905
],
"rotation": [
0.5077241387638071,
-0.4973392230703816,
0.49837167536166627,
-0.4964832014373754
],
"camera_intrinsic": [
[
1252.8131021185304,
0.0,
826.588114781398
],
[
0.0,
1252.8131021185304,
469.9846626224581
],
[
0.0,
0.0,
1.0
]
]
}
Note that all extrinsic parameters are given with respect to the ego vehicle body frame (as per here).
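To make the record above usable as a camera-to-ego transform, its translation and rotation can be assembled into a 4x4 homogeneous matrix. Here is a minimal numpy sketch, assuming only that nuScenes stores quaternions in [w, x, y, z] (scalar-first) order, as pyquaternion does, and reusing the values printed above:

```python
import numpy as np

def quat_to_rot(q):
    """[w, x, y, z] unit quaternion -> 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# Values copied from the calibrated_sensor record printed above.
translation = [1.72200568478, 0.00475453292289, 1.49491291905]
rotation = [0.5077241387638071, -0.4973392230703816,
            0.49837167536166627, -0.4964832014373754]

# 4x4 homogeneous transform taking points from the camera frame
# to the ego vehicle frame.
T_ego_cam = np.eye(4)
T_ego_cam[:3, :3] = quat_to_rot(rotation)
T_ego_cam[:3, 3] = translation
print(T_ego_cam)
```

The resulting matrix maps homogeneous points from the camera frame into the ego vehicle body frame; its inverse maps the other way.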
Thanks for your kind hint. However, I still don't know how to get the camera pose from the camera parameters.
The camera pose would be given by the translation and rotation entries in cs.
Hello. I think the 'translation and rotation' you mentioned above are just used to convert between the ego (the car itself) coordinate system and the sensor coordinate system. In brief, what we want to know is: how can we transform a 2D point in the image into the 3D world coordinate system?
The translation and rotation entries in cs would enable you to transform from the sensor frame to the ego frame, and then you can transform from the ego frame to the global frame.
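The two-step chain just described (sensor frame -> ego frame -> global frame) can be sketched end to end in numpy. The calibrated_sensor values below are the ones printed earlier in this thread; the ego_pose translation and rotation are made-up placeholders for illustration, since in practice they come from nusc.get('ego_pose', sd['ego_pose_token']):

```python
import numpy as np

def pose_to_matrix(translation, rotation):
    """Build a 4x4 transform from a translation and a [w, x, y, z]
    quaternion, the layout nuScenes uses for its pose records."""
    w, x, y, z = rotation
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = translation
    return T

# calibrated_sensor: camera frame -> ego frame (values from the record above).
T_ego_cam = pose_to_matrix(
    [1.72200568478, 0.00475453292289, 1.49491291905],
    [0.5077241387638071, -0.4973392230703816,
     0.49837167536166627, -0.4964832014373754])

# ego_pose: ego frame -> global frame. These numbers are placeholders;
# in practice read nusc.get('ego_pose', sd['ego_pose_token']).
T_global_ego = pose_to_matrix([600.0, 1600.0, 0.0],
                              [0.96, 0.0, 0.0, 0.28])

# Compose the chain: camera frame -> ego frame -> global frame.
# T_global_cam is the camera-to-world pose of this image.
T_global_cam = T_global_ego @ T_ego_cam
cam_center = T_global_cam[:3, 3]  # camera position in global coordinates
print(T_global_cam)
```

Each camera image has its own ego_pose (keyed by its sample_data record), so repeating this per image yields one camera-to-world matrix per frame, which is the pose a NeRF-style pipeline needs.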
Hello, I am trying to make my own dataset. Could you tell me how to get the translation and rotation from the camera frame (camera coordinates) to the ego frame?
@fatejzz if your own dataset is in the same format as nuScenes, you can follow the above discussion to obtain the transformation you desire
Well, actually I want to know how to obtain such a transformation. After reading the details about the nuScenes dataset on the website, I found that you obtain it by calibrating between the lidar and the camera (camera extrinsics) and between the lidar and the ego vehicle (lidar extrinsics).
I want to use the nuScenes dataset for a Neural Radiance Fields method, which needs camera poses. I wonder how to get the camera pose from nuScenes' raw data, ideally in COLMAP format (though it's fine if it isn't). Thanks in advance.