Open · Saivigneshb15 opened this issue 2 years ago
The relative positions of cameras and lidar are not fixed?
No. Since the ego vehicle is moving, the positions of the camera and lidar, which are given in the world coordinate system, also change every frame. So if we take the extrinsic matrix, we end up with 100 different extrinsic matrices for 100 frames, because the camera position changes per frame.
For information, the position values are given with respect to the world coordinate system.
We only need the relative position between the camera and the lidar; this should be constant even though their world positions are both moving. This holds as long as the lidar and cameras are mounted at fixed positions on top of the ego car, and the relative transform can be recovered from the camera/lidar world poses. In other words, you should get the same result for all 100 frames (up to some small numerical error) if you compute the extrinsic matrix for each frame.
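For illustration, here is a minimal sketch of how the constant lidar-to-camera extrinsic could be recovered from per-frame world poses. The function and variable names, and the convention that each 4x4 pose maps sensor coordinates to world coordinates, are assumptions for this sketch and not part of SUSTechPOINTS:

import numpy as np

def relative_extrinsic(T_world_lidar: np.ndarray, T_world_camera: np.ndarray) -> np.ndarray:
    """Lidar-to-camera extrinsic from two 4x4 sensor-to-world poses.

    p_camera = inv(T_world_camera) @ T_world_lidar @ p_lidar
    """
    return np.linalg.inv(T_world_camera) @ T_world_lidar

# If the sensors are rigidly mounted, this result should be (almost) identical
# for every frame; taking frame 0, or averaging over frames, gives the static extrinsic.
# extrinsics = [relative_extrinsic(T_l, T_c) for T_l, T_c in zip(lidar_poses, cam_poses)]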
Sorry, I think I misunderstood the problem. I added a feature to support one distinct extrinsic matrix for every frame (react-app branch), if that's not too late :).
Hi naurril, thanks for the update. Could you please point me to the distinct extrinsic matrix update you added for this problem?
Check out the react-app branch; note that npm is needed to compile the frontend code, see the installation instructions.
The extrinsic matrices should be arranged like this:
SUSTechPOINTS
+-- data
    +-- scene001
        +-- lidar
        |   +-- frame001.pcd   // or .bin
        |   +-- ...
        +-- camera
        |   +-- front
        |   |   +-- frame001.jpg
        |   |   +-- ...
        |   +-- left
        +-- calib
            +-- camera
                +-- front.json
                +-- front
                |   +-- frame001.json
                |   +-- ...
                +-- left.json
                +-- left
Taking the front camera as an example, you can put the static extrinsic matrix in calib/camera/front.json as before. For each frame's distinct extrinsic matrix, put it into calib/camera/front/framexxx.json.
// calib/camera/front.json
{
    extrinsic: [...],  // default static extrinsic matrix, 4*4
    intrinsic: [...],  // default intrinsic matrix of the front camera, 3*3
}
// calib/camera/front/frame001.json
{
    extrinsic: [...],        // 4*4, if present, it's used as the lidar-to-camera extrinsic matrix
    lidar_to_camera: [...],  // 4*4, same as above with a more meaningful name
    lidar_transform: [...],  // 4*4, if present, the lidar points are transformed by it first and then by
                             // the default static extrinsic/intrinsic matrices:
                             // p_img = M_intrinsic * M_static_extrinsic * M_lidar_transform * p_lidar
                             // it's used if we have the lidar pose rather than the camera pose.
    intrinsic: [...]         // if present, overwrites the default intrinsic matrix
}
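As a hedged illustration of how the lidar_transform variant chains together, here is a minimal sketch. The function and parameter names are made up for this example; only the multiplication order follows the formula in the comments above:

import numpy as np

def project_point(p_lidar, M_lidar_transform, M_static_extrinsic, M_intrinsic):
    """Project one lidar point using the lidar_transform chain described above.

    p_img = M_intrinsic * M_static_extrinsic * M_lidar_transform * p_lidar
    """
    p = np.append(np.asarray(p_lidar, dtype=float), 1.0)   # homogeneous 4-vector
    p_cam = M_static_extrinsic @ M_lidar_transform @ p     # 4x4 chain into the camera frame
    uvw = M_intrinsic @ p_cam[:3]                           # 3x3 intrinsic applied to x, y, z
    return uvw[:2] / uvw[2]                                 # pixel coordinates (u, v)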
Thank you so much naurril. Can I also know what the extrinsic matrix looks like? If you have any script to generate it, please share it; if not, please describe its structure and the parameters inside it.
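Not speaking for the maintainer, but a 4x4 extrinsic is typically a 3x3 rotation plus a translation in homogeneous form, [[R, t], [0 0 0 1]]. Below is a minimal sketch of writing such a matrix into calib/camera/front.json; the placeholder rotation/translation/intrinsic values and the row-major flattening of the matrices are assumptions based on the snippets above, not a confirmed file format:

import json
import os
import numpy as np

R = np.eye(3)                        # 3x3 rotation from lidar to camera (placeholder)
t = np.array([0.0, 0.0, 0.0])        # translation from lidar to camera (placeholder)

extrinsic = np.eye(4)
extrinsic[:3, :3] = R
extrinsic[:3, 3] = t                 # [[R, t], [0 0 0 1]]

intrinsic = np.array([[1000.0,    0.0, 960.0],
                      [   0.0, 1000.0, 540.0],
                      [   0.0,    0.0,   1.0]])   # fx, fy, cx, cy placeholders

os.makedirs("calib/camera", exist_ok=True)
with open("calib/camera/front.json", "w") as f:
    json.dump({"extrinsic": extrinsic.flatten().tolist(),
               "intrinsic": intrinsic.flatten().tolist()}, f, indent=2)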
Hi,
I am working with a sequence of 100 frames from 6 cameras and 1 lidar mounted on top of the ego vehicle. I also have the 6 camera positions and camera directions for each of the 100 frames. Since the camera positions change across the 100 frames as the ego vehicle moves, the extrinsic matrix also changes over the whole sequence. So if we are considering one extrinsic matrix for all 100 frames, how can we make this work?