yenchenlin / nerf-pytorch

A PyTorch implementation of NeRF (Neural Radiance Fields) that reproduces the results.
MIT License
5.34k stars 1.04k forks source link

about function get_rays #53

Open UestcJay opened 2 years ago

UestcJay commented 2 years ago

Many thanks for your great work! I read the code and have some questions. What does this line of code mean?

vanshilshah97 commented 2 years ago

This converts pixel coordinates to ray directions; it is a camera-model concept. It is essentially computing `K.inv() @ pixel_value`.
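A minimal sketch of that idea, using hypothetical intrinsics (the focal lengths and principal point below are made-up values for illustration): applying `K^{-1}` to homogeneous pixel coordinates `(u, v, 1)` gives ray directions in an OpenCV-style camera frame, and it is component-wise identical to the explicit `(i - cx)/fx, (j - cy)/fy, 1` form used in `get_rays`.

```python
import torch

# Hypothetical intrinsics for illustration (not values from the repo)
K = torch.tensor([[500., 0., 320.],
                  [0., 500., 240.],
                  [0., 0., 1.]])

H, W = 480, 640
j, i = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                      torch.arange(W, dtype=torch.float32), indexing='ij')

# Homogeneous pixel coordinates (u, v, 1), shape (H, W, 3)
pix = torch.stack([i, j, torch.ones_like(i)], -1)

# K^{-1} @ pixel: ray direction in an OpenCV-style camera frame (+z forward)
dirs_cv = pix @ torch.linalg.inv(K).T

# Component-wise this is ((i - cx)/fx, (j - cy)/fy, 1)
dirs_manual = torch.stack([(i - K[0, 2]) / K[0, 0],
                           (j - K[1, 2]) / K[1, 1],
                           torch.ones_like(i)], -1)
assert torch.allclose(dirs_cv, dirs_manual, atol=1e-4)
```

Note this sketch has no sign flips yet; the flips in the actual `get_rays` line are the convention change discussed below in the thread.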

daixiangzi commented 1 year ago

+1

ivashmak commented 9 months ago

Why does it have a positive x-value but negative y- and z-values?

`dirs = torch.stack([(i-K[0][2])/K[0][0], -(j-K[1][2])/K[1][1], -torch.ones_like(i)], -1)`

If you compute `K.inv @ pixel_value`, the result should be:

`dirs = torch.stack([(i-K[0][2])/K[0][0], (j-K[1][2])/K[1][1], torch.ones_like(i)], -1)`

or at least:

`dirs = torch.stack([-(i-K[0][2])/K[0][0], -(j-K[1][2])/K[1][1], -torch.ones_like(i)], -1)`

yifliu3 commented 9 months ago

Hi, I found the reason. `dirs = torch.stack([(i-K[0][2])/K[0][0], -(j-K[1][2])/K[1][1], -torch.ones_like(i)], -1)` in fact transforms the resulting 3D coordinates (X, Y, Z) to (X, -Y, -Z). The reason is that the coordinate system used by OpenCV/COLMAP differs from the one used by NeRF/OpenGL, so this transformation is required.
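The convention change described above can be sketched as follows (intrinsics are hypothetical illustration values): OpenCV/COLMAP uses +x right, +y down, +z forward, while NeRF/OpenGL uses +x right, +y up, and the camera looking down -z, so mapping between them is exactly the element-wise flip (X, Y, Z) → (X, -Y, -Z).

```python
import torch

# Hypothetical intrinsics for illustration
fx, fy, cx, cy = 500., 500., 320., 240.
H, W = 480, 640
j, i = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                      torch.arange(W, dtype=torch.float32), indexing='ij')

# OpenCV/COLMAP camera frame: +x right, +y down, +z forward
dirs_cv = torch.stack([(i - cx) / fx, (j - cy) / fy, torch.ones_like(i)], -1)

# NeRF/OpenGL camera frame: +x right, +y up, camera looks down -z,
# i.e. (X, Y, Z) -> (X, -Y, -Z)
flip = torch.tensor([1., -1., -1.])
dirs_gl = dirs_cv * flip

# This reproduces the line from get_rays being discussed
dirs_ref = torch.stack([(i - cx) / fx, -(j - cy) / fy,
                        -torch.ones_like(i)], -1)
assert torch.allclose(dirs_gl, dirs_ref)
```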

huahangc commented 8 months ago

> Hi, I found the reason. `dirs = torch.stack([(i-K[0][2])/K[0][0], -(j-K[1][2])/K[1][1], -torch.ones_like(i)], -1)` in fact transforms the resulting 3D coordinates (X, Y, Z) to (X, -Y, -Z). The reason is that the coordinate system used by OpenCV/COLMAP differs from the one used by NeRF/OpenGL, so this transformation is required.

I wonder why the coordinate system matters for the NeRF model. If I don't transform the coordinate system, will it perform worse?

GauravNerf commented 5 months ago

Hi, did you try to train without orienting to the NeRF coordinate system?