yenchenlin / nerf-pytorch

A PyTorch implementation of NeRF (Neural Radiance Fields) that reproduces the results.

Rendered depth video is black #59

Open gkouros opened 2 years ago

gkouros commented 2 years ago

I'm trying to train a NeRF on the lego bulldozer scene, and although the RGB video comes out as expected, the depth video is just black. I'm using torch==1.8.1 and torchvision==0.9.1 with cuda-10.2. Any idea what could be going wrong?

renliao commented 2 years ago

Hello! I'm having the same problem. Did you manage to solve it?

gkouros commented 2 years ago

Unfortunately, I haven't figured this out yet.

yashbhalgat commented 2 years ago

You need to normalize the depth_map (or disp_map) values to the range [0.0, 1.0] before visualizing them or saving them as images.

Simply doing `disp_map = (disp_map - disp_map.min()) / (disp_map.max() - disp_map.min())` will work.
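For instance, a rough sketch of what I mean (the `normalize_disp` helper and the epsilon guard are just illustrative, not code from this repo):

```python
import numpy as np

def normalize_disp(disp_map):
    """Rescale disparity to [0, 1] so it is visible when saved as an 8-bit image or video frame."""
    d_min, d_max = disp_map.min(), disp_map.max()
    return (disp_map - d_min) / (d_max - d_min + 1e-8)  # epsilon avoids division by zero

# e.g. before writing a video frame:
# frame_uint8 = (255 * np.clip(normalize_disp(disp_map), 0, 1)).astype(np.uint8)
```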

gkouros commented 2 years ago

@yashbhalgat I already tried that before creating the disparity video, but it seems that the disparity contains NaNs, which is the main problem. At what point do you perform the normalization? Btw, I see that the NaNs originate from the following line: https://github.com/yenchenlin/nerf-pytorch/blob/a15fd7cb363e93f933012fd1f1ad5395302f63a4/run_nerf.py#L299
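If I read that line correctly, it is the disparity computation, and the NaNs would come from rays whose accumulated weights sum to zero (giving 0/0). A rough sketch of one possible guard (the `clamp` and `nan_to_num` calls are my own workaround, not the repo's code):

```python
import torch

def safe_disparity(depth_map, weights, eps=1e-10):
    """Disparity = 1 / depth, guarded so rays whose weights sum to ~0 don't produce 0/0 = NaN."""
    acc = torch.sum(weights, -1).clamp(min=eps)                      # accumulated weight per ray
    disp = 1.0 / torch.max(eps * torch.ones_like(depth_map), depth_map / acc)
    return torch.nan_to_num(disp, nan=0.0)                           # zero out any remaining NaNs

# usage with the per-ray outputs of volume rendering:
# disp_map = safe_disparity(depth_map, weights)
```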

vanshilshah97 commented 2 years ago

Were you able to resolve the issue?

lachholden commented 2 years ago

I'm also having this issue – there are occasional artefacts that pass through the video, but most of the time it's just black.

Yuchen-Song commented 2 years ago

Have you fixed that problem?

rockywind commented 1 year ago

Hi, what's the difference between a disparity map and a depth map? And how can I get a depth map at real-world scale?