nerfstudio-project / nerfstudio

A collaboration friendly studio for NeRFs
https://docs.nerf.studio
Apache License 2.0
8.95k stars · 1.19k forks

Get depth value (in meters) out of colormap depth image #2147

Open Edward11235 opened 1 year ago

Edward11235 commented 1 year ago

My goal is to read the depth value (in meters) at a pixel.

After training my Nerfacto model, I use the following command to generate an accumulation image:

ns-render camera-path --load-config ~/Desktop/outputs/poster/nerfacto/2023-06-20_223608/config.yml --image-format jpeg --jpeg-quality 100 --rendered-output-names accumulation --camera-path-filename ~/Desktop/data/nerfstudio/poster/camera_path.json --output-format images --output-path ~/Desktop/render_output/output.jpg --downscale-factor 1

And the following command to generate a depth image:

ns-render camera-path --load-config ~/Desktop/outputs/poster/nerfacto/2023-06-20_223608/config.yml --image-format jpeg --jpeg-quality 100 --rendered-output-names depth --camera-path-filename ~/Desktop/data/nerfstudio/poster/camera_path.json --output-format images --output-path ~/Desktop/render_output/output.jpg --downscale-factor 1

Both images I got are color images. My question is: how can I read the true depth values (in meters) out of these color images? What I expected is a grayscale image. Thanks.
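(Editor's note: a colormapped render can only be inverted approximately, by matching each pixel against the colormap's lookup table. The sketch below assumes the colormap name and the (vmin, vmax) normalization range are known; neither is stored in the image itself, and JPEG compression adds further error, which is why saving the raw depth output, as discussed later in this thread, is the more reliable route.)

```python
# Approximate inversion of a colormapped scalar image (a sketch).
# Assumptions: the colormap name and the (vmin, vmax) range used at
# render time are known -- neither is recoverable from the image alone.
import numpy as np
import matplotlib
from scipy.spatial import cKDTree

def invert_colormap(rgb, cmap_name="viridis", vmin=0.0, vmax=1.0, levels=256):
    """Recover approximate scalar values from an (H, W, 3) uint8 image."""
    # Sample the colormap into a (levels, 3) RGB lookup table in [0, 1].
    lut = np.asarray(matplotlib.colormaps[cmap_name](np.linspace(0.0, 1.0, levels)))[:, :3]
    tree = cKDTree(lut)  # nearest-neighbor search over the colormap entries
    flat = rgb.reshape(-1, 3).astype(np.float64) / 255.0
    _, idx = tree.query(flat)  # index of the closest colormap entry per pixel
    return (vmin + (vmax - vmin) * idx / (levels - 1)).reshape(rgb.shape[:2])
```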

tancik commented 1 year ago

The NeRF model doesn't know the underlying "scale" of the scene, so getting meter values is not possible.

AliYoussef97 commented 11 months ago

@Edward11235 @tancik I am not sure if this helps, but I am working on a project that also needs the depth in meters, and I found out that saving the depth output of the nerfacto model directly produces the depth in meters.

What I have done is comment out these lines and add the following:

```python
import numpy as np  # needed for np.save

output_depth = output_image.cpu().numpy()
output_depth = output_depth.squeeze(2)  # drop the channel dim: (H, W, 1) -> (H, W)
np.save(f"{output_image_dir}/{camera_idx:05d}.npy", output_depth)
```

I verified it by generating a point cloud, importing it into Blender, and measuring the distance between the cameras and random points on the object in the scene. I then compared those distances to the saved depth values at the corresponding pixel coordinates, and they seem to align.

I hope that helps.
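(Editor's note: to read a depth value back out of one of the saved .npy files at a given pixel, something like the helper below works. The path and pixel coordinates are placeholders, and the values are in the scene's units, which are metric only if the input poses were metric.)

```python
import numpy as np

def depth_at(npy_path, u, v):
    """Depth at pixel column u, row v of a saved (H, W) .npy depth map.

    npy_path is a placeholder for one of the files saved above,
    e.g. "renders/00000.npy". Values are in the pose units of the scene.
    """
    depth = np.load(npy_path)
    return float(depth[v, u])  # note the row-major (v, u) indexing
```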

elenacliu commented 11 months ago

@AliYoussef97 Hi, I have the same need, and my solution is the same as yours!

AliYoussef97 commented 11 months ago

@elenacliu That's great to know!

sumanttyagi commented 1 month ago

@AliYoussef97 Can you share the idea behind why this gives absolute depth? During reconstruction the scene is rescaled to fit within a certain range, so how is the absolute depth still retained?
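(Editor's note: one relevant detail is that nerfstudio rescales the input poses internally and records that scaling in a dataparser_transforms.json file alongside its outputs. Assuming a single uniform "scale" key, which is an assumption about that file's contents, dividing the rendered depth by it should return the depth to the units of the input poses; those units are metric only if the input poses were. A minimal sketch:)

```python
import json
import numpy as np

def depth_to_input_units(depth, dataparser_transforms_path):
    """Undo nerfstudio's internal pose scaling (a sketch).

    Assumes dataparser_transforms.json holds a single uniform "scale"
    factor that was multiplied into the input poses; dividing the
    rendered depth by it converts back to input-pose units.
    """
    with open(dataparser_transforms_path) as f:
        scale = json.load(f)["scale"]
    return np.asarray(depth, dtype=np.float64) / scale
```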