Open qiyang77 opened 1 year ago
I think this is because Nerfstudio normalizes the extent of the cameras to [-1, 1].
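For intuition, here is a minimal sketch of what that kind of pose normalization looks like. This mirrors the idea behind `--auto-scale-poses` (scale translations so the largest coordinate magnitude becomes 1), but it is an illustration, not Nerfstudio's actual code:

```python
import numpy as np

def auto_scale_poses(c2w: np.ndarray):
    """Scale camera-to-world poses so translations fit in [-1, 1].

    c2w: (N, 4, 4) camera-to-world matrices.
    Returns the scaled poses and the scale factor applied.
    """
    translations = c2w[:, :3, 3]
    scale = 1.0 / np.max(np.abs(translations))  # largest coordinate maps to 1
    scaled = c2w.copy()
    scaled[:, :3, 3] *= scale
    return scaled, scale

# Example: two cameras, the farthest one 4 units out, end up in the unit cube.
poses = np.tile(np.eye(4), (2, 1, 1))
poses[0, :3, 3] = [4.0, 0.0, 0.0]
poses[1, :3, 3] = [-2.0, 1.0, 0.0]
scaled, s = auto_scale_poses(poses)
```

Any scale like this that is applied during training but not undone at export time would show up exactly as the mismatch described here.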
I was wrong, and I also noticed that the exported pcd was slightly rotated.
Do you have camera optimization enabled?
Thanks for your hint! I added `--pipeline.datamanager.camera-optimizer.mode off nerfstudio-data --orientation-method none --center-method none --auto-scale-poses False`, and the scale and rotation mismatches are gone! However, I found the NeRF may not converge when I set `--auto-scale-poses` to `False`, even if the extent of the camera locations has already been normalized to within (-1, -1, -1) to (1, 1, 1) or (-2, -2, -2) to (2, 2, 2).
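For reference, those flags assembled into one command. The flag names are copied from the comment above; the method (`nerfacto`) and data path are placeholders, and flag locations can vary between Nerfstudio versions, so treat this as a sketch:

```shell
ns-train nerfacto \
  --pipeline.datamanager.camera-optimizer.mode off \
  nerfstudio-data --data /path/to/data \
  --orientation-method none --center-method none --auto-scale-poses False
```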
@qiyang77 Use the exported pcd with the exported camera poses rather than the camera poses from transforms.json; that way both the exported camera poses and the pcd are in the Nerfstudio coordinate frame, which should resolve your issue. Note that when exporting camera poses, the current implementation does not export the optimized camera poses; for exporting optimized camera poses, check https://github.com/nerfstudio-project/nerfstudio/pull/1191#issuecomment-1766034335
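Alternatively, the poses from transforms.json can be mapped into the normalized frame by hand. A training run writes a `dataparser_transforms.json` containing a 3x4 `transform` and a scalar `scale`; the sketch below assumes that schema (check the file your version produces) and applies x' = scale * (R x + t) to world-frame points:

```python
import numpy as np

def world_to_nerfstudio(points: np.ndarray, transform: np.ndarray, scale: float) -> np.ndarray:
    """Map (N, 3) points from the transforms.json frame into the
    normalized Nerfstudio frame: x' = scale * (transform @ [x; 1])."""
    homo = np.hstack([points, np.ones((len(points), 1))])  # (N, 4)
    return scale * (transform @ homo.T).T  # transform is (3, 4)

# Hypothetical usage, loading the file a training run writes:
#   meta = json.load(open(".../dataparser_transforms.json"))
#   transform = np.array(meta["transform"])   # (3, 4)
#   scale = meta["scale"]

# Sanity check with an identity rotation, zero translation, scale 0.5:
transform = np.hstack([np.eye(3), np.zeros((3, 1))])
pts = np.array([[2.0, 0.0, 0.0]])
out = world_to_nerfstudio(pts, transform, scale=0.5)
```

Applying this to the camera centers from transforms.json should bring them into the same frame as the exported pcd (up to any camera-optimizer refinement, per the caveat above).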
How can I detect an object in an RGB image and map that object to a point cloud?
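One common approach, once the cameras and point cloud are in the same frame: run a 2D detector on the image, then project the point cloud into that camera and keep the points whose projection lands inside the detection box. This is a generic pinhole-projection sketch, not Nerfstudio-specific code; the detector itself is out of scope here:

```python
import numpy as np

def points_in_box(points_w: np.ndarray, w2c: np.ndarray, K: np.ndarray, box) -> np.ndarray:
    """Mask over (N, 3) world points whose projection falls inside a 2D
    detection box (x_min, y_min, x_max, y_max).

    w2c: (4, 4) world-to-camera extrinsic; K: (3, 3) intrinsics.
    Assumes a standard pinhole model with +z pointing into the scene.
    """
    homo = np.hstack([points_w, np.ones((len(points_w), 1))])
    cam = (w2c @ homo.T).T[:, :3]
    in_front = cam[:, 2] > 0                      # discard points behind the camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-9, None)  # perspective divide
    x0, y0, x1, y1 = box
    inside = (uv[:, 0] >= x0) & (uv[:, 0] <= x1) & (uv[:, 1] >= y0) & (uv[:, 1] <= y1)
    return in_front & inside

# Toy example: a point straight ahead projects to the principal point (50, 50)
# and falls in the box; a point far off-axis does not.
K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 2.0], [5.0, 0.0, 2.0]])
mask = points_in_box(pts, np.eye(4), K, box=(40, 40, 60, 60))
```

Occlusion is ignored here; a depth test (e.g. against a rendered depth map) would be needed to exclude points behind the detected object.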
I visualized the cameras and the exported point cloud via Open3D, but I found there is a scale mismatch between them: the extent of the camera positions (extracted from transforms.json) is much bigger than that of the pcd. All parameters were at Nerfstudio's default settings. I am wondering if there is a scale factor I missed, or if something else is wrong.
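A quick way to quantify a mismatch like this (a diagnostic sketch, not Nerfstudio code) is to compare the axis-aligned bounding-box extents of the camera centers and the point cloud; a roughly constant per-axis ratio points to a uniform scale factor:

```python
import numpy as np

def extent(points: np.ndarray) -> np.ndarray:
    """Axis-aligned bounding-box size of an (N, 3) point set."""
    return points.max(axis=0) - points.min(axis=0)

# Toy data standing in for camera centers (from transforms.json) and an
# exported pcd that is uniformly 8x smaller:
cam_centers = np.array([[0.0, 0.0, 0.0], [8.0, 8.0, 8.0]])
pcd = cam_centers / 8.0
ratio = extent(cam_centers) / extent(pcd)  # per-axis scale mismatch
```

If the ratio is not constant across axes, the difference is more than a scale (e.g. a rotation or recentering is also involved).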