PolyCam / polyform

Tools for working with raw Polycam data, including export for NeRF and other formats.
https://poly.cam
MIT License

LiDAR Point Cloud Alignment #11

Open alexrothmaier opened 3 months ago

alexrothmaier commented 3 months ago

I am currently using the LiDAR mode in Polycam to capture scenes and export raw data. While I know that I can directly export point clouds, what I want to do now is generate a point cloud for each image using camera parameters and depth maps, and then simply overlay them to create a complete scene point cloud. I have successfully generated a point cloud for each image, but I encountered a problem when fusing them together; there is an offset between the point clouds.

So the deprojection from 2D pixels to 3D points in camera coordinates seems to work, but applying the inverse extrinsic matrices does not seem to align the point clouds properly.

Do you have any suggestions as to what could be the cause?
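
For context, here is a minimal sketch of the per-image deprojection and camera-to-world transform being described, assuming a metric depth map plus fx, fy, cx, cy and a 4x4 camera-to-world pose per frame, with an OpenCV-style convention where the camera looks down +Z (the actual export may use a different axis convention, e.g. ARKit-style with the camera looking down -Z, which is one common source of exactly this kind of offset):

```python
import numpy as np

def depth_to_world_points(depth, fx, fy, cx, cy, cam_to_world):
    """Backproject a metric depth map (H, W) into world-space 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    # Pinhole deprojection: pixel (u, v) at depth z -> camera-space point
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    # Camera space -> world space via the (assumed camera-to-world) pose
    pts_world = (cam_to_world @ pts_cam.T).T[:, :3]
    return pts_world[z.reshape(-1) > 0]  # drop pixels with no depth reading
```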

benedictquartey commented 1 month ago

Did you find a fix for this, @alexrothmaier? I am facing the same issue.

alexrothmaier commented 1 month ago

I found a related thread on Reddit, and we assumed that it does not work because the poses correspond to the RGB camera's position, so there is a slight offset relative to the LiDAR sensor. I could not fix it and switched to StrayScanner to collect my RGB-D dataset.

benedictquartey commented 1 month ago

Thanks for saving me years, haha! I wonder how the Polycam folks are able to get such good-looking point clouds from the same data.

xuyanging commented 1 month ago

Hi, how do I get the point cloud? I am not sure whether there is a point cloud file in the raw data.

xuyanging commented 1 month ago

I tried to visualize the camera poses (transforms.json) and the point cloud (.gltf) in the same coordinate system, but something seems wrong. Could you help me figure it out? Thanks! (screenshot attached)

benedictquartey commented 1 month ago

@xuyanging Typically, to create a point cloud you use the depth data and camera poses to backproject 2D pixels from the RGB images into 3D space. You need to do this for every image and then fuse the individual point clouds to get one point cloud of the entire scene. The problem is that even after doing this, the fused point cloud looks clustered and out of place, probably due to some alignment issue with the camera poses. I am working on a fix and will share the code once I figure it out.
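
A sketch of that fusion step, assuming each frame has already been backprojected into a world-space (N, 3) array as in the earlier snippet; Open3D is used here only to build, downsample, and save the merged cloud:

```python
import numpy as np
import open3d as o3d

def fuse_point_clouds(per_frame_points, output_path="fused.ply"):
    """Concatenate per-frame world-space point arrays into one scene cloud."""
    merged = np.concatenate(per_frame_points, axis=0)
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(merged)
    pcd = pcd.voxel_down_sample(voxel_size=0.01)  # keep the merged cloud manageable
    o3d.io.write_point_cloud(output_path, pcd)
    return pcd
```

If the merged cloud still looks scattered, the usual suspects are the depth units (millimeters vs. meters), whether the stored poses are camera-to-world or world-to-camera, and the pose axis convention.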

xuyanging commented 1 month ago

@benedictquartey

Thanks for your response.

I checked the output files and found that mesh_info.json is important: after applying the alignmentTransform matrix, it looks right (screenshot attached).
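
For anyone following along, a minimal sketch of applying that matrix, assuming alignmentTransform in mesh_info.json is a flat list of 16 floats forming a 4x4 homogeneous transform (row- vs. column-major ordering is worth verifying against your own data):

```python
import json
import numpy as np

with open("mesh_info.json") as f:
    info = json.load(f)

# Assumption: "alignmentTransform" is 16 floats; transpose if the result looks mirrored.
align = np.array(info["alignmentTransform"], dtype=np.float64).reshape(4, 4)

def apply_transform(points, T):
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
    homog = np.hstack([points, np.ones((points.shape[0], 1))])
    return (T @ homog.T).T[:, :3]
```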

But a new problem is that when I try to apply the camera intrinsics to visualize the point cloud at each view image, it does not seem correct. (screenshot attached)
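
For reference, projecting the (aligned) world-space points back into a single view with the intrinsics looks roughly like the sketch below. It assumes an OpenCV-style camera that looks down +Z; if the poses follow an ARKit/OpenGL convention (camera looking down -Z, Y up), the Y and Z axes need flipping first, which is one possible explanation for a wrong-looking per-view result:

```python
import numpy as np

def project_to_image(points_world, cam_to_world, fx, fy, cx, cy, width, height):
    """Project world-space points into one camera view to check the alignment."""
    world_to_cam = np.linalg.inv(cam_to_world)
    homog = np.hstack([points_world, np.ones((points_world.shape[0], 1))])
    pts_cam = (world_to_cam @ homog.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0              # keep points in front of the camera
    x, y, z = pts_cam[in_front].T
    u = fx * x / z + cx                       # pinhole projection to pixels
    v = fy * y / z + cy
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    return np.stack([u[inside], v[inside]], axis=1), z[inside]
```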

Entongsu commented 1 month ago

Hi, @xuyanging. I have downloaded the image files from the Polycam website, but I am having trouble understanding the camera pose information provided. The camera parameters in the folder seem unusual and don't make sense to me, particularly the values for cx, cy, fx, and fy.

Could you please provide some guidance or clarification on how these camera pose parameters are generated or how they should be interpreted? Any suggestions or resources you can offer would be greatly appreciated.

xuyanging commented 1 day ago

@Entongsu cx, cy, fx, and fy are the camera intrinsic parameters (the principal point and focal lengths, in pixels); you can refer to this website for more information: https://www.baeldung.com/cs/focal-length-intrinsic-camera-parameters
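
For completeness, a small sketch of how those four values form the standard pinhole intrinsic matrix; the numbers below are placeholders, not taken from any particular export:

```python
import numpy as np

fx, fy = 1600.0, 1600.0   # focal lengths in pixels (placeholder values)
cx, cy = 960.0, 720.0     # principal point in pixels (placeholder values)

K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Projecting a camera-space point (X, Y, Z) to pixel coordinates:
X, Y, Z = 0.1, -0.05, 1.2
u = fx * X / Z + cx
v = fy * Y / Z + cy
```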