Hi, I appreciate your great work! I'm implementing a light-field-based neural network that has a different input structure for its MLP.
For each ray, the MLP outputs the RGB color and a depth value in the range [0, 1] (in NDC coordinates) through a sigmoid function.
My questions are as follows:
For NDC coordinates, if I understand correctly, the scene is bounded by the near (0) and far (1) planes. Does all scene content then lie between the near and far planes?
As we can see in the `get_rays` function, the z value of the camera-space direction is initialized to -1. I'm fairly sure the camera origin is in world coordinates, based on the `c2w[:3,-1]` part of the code below. As I understand it, the rays are still in world coordinates, since the magnitude of the direction vector doesn't matter. Am I right?
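For reference, the `get_rays` I'm referring to is roughly the following (a paraphrase of the NeRF PyTorch version, not my exact code):

```python
import torch

def get_rays(H, W, K, c2w):
    # pixel grid: i indexes columns (x), j indexes rows (y)
    i, j = torch.meshgrid(torch.linspace(0, W - 1, W),
                          torch.linspace(0, H - 1, H), indexing='xy')
    # camera-space directions; z is set to -1 because the camera looks down -z
    dirs = torch.stack([(i - K[0][2]) / K[0][0],
                        -(j - K[1][2]) / K[1][1],
                        -torch.ones_like(i)], dim=-1)
    # rotate the directions from camera space into world space
    rays_d = torch.sum(dirs[..., None, :] * c2w[:3, :3], dim=-1)
    # the ray origin is the camera center: the translation column of c2w
    rays_o = c2w[:3, -1].expand(rays_d.shape)
    return rays_o, rays_d
```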
I get some depth values and compute the corresponding points with the code below. I intend the code to work regardless of the coordinate system, NDC or world. `z_vals` contains, for each of the `N_rays` rays, sample values in `[near, far]`. Is this code right?
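The computation I have in mind is the usual `o + t * d` evaluation along each ray; a minimal sketch (variable names are illustrative, not my exact code):

```python
import torch

def points_along_rays(rays_o, rays_d, z_vals):
    # rays_o, rays_d: [N_rays, 3] ray origins and directions
    # z_vals:         [N_rays, N_samples] sample depths in [near, far]
    # returns pts:    [N_rays, N_samples, 3], i.e. o + t * d for every sample t
    return rays_o[..., None, :] + rays_d[..., None, :] * z_vals[..., :, None]
```

My understanding is that this formula itself is coordinate-agnostic, as long as `rays_o`, `rays_d`, and `z_vals` are all expressed in the same space (all NDC or all world).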
If the points along a given ray are correctly computed from some depth values, how do I compute the real depth value when the NDC coordinate system is used?
For example, if I have a point P(Xp, Yp, Zp) in NDC coordinates, can the following approach back-project it to a real depth value?
In world space I use the same coordinate system as NeRF, (right, up, backward), and I guess the NDC coordinate system is (right, up, forward).
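Concretely, what I have in mind is inverting NeRF's forward-facing NDC projection; a sketch, assuming the conventions above and the formulas from the NeRF paper's appendix (`x_ndc = -(focal/(W/2)) * x_cam/z_cam`, likewise for y, and `z_ndc = 1 + 2*near/z_cam`, with a 3x4 `c2w` as in NeRF):

```python
import numpy as np

def ndc_points_to_world(ndc_pts, H, W, focal, c2w, near=1.0):
    # ndc_pts: [N_rays, 3] points (Xp, Yp, Zp) in NDC
    x_ndc, y_ndc, z_ndc = ndc_pts[..., 0], ndc_pts[..., 1], ndc_pts[..., 2]
    # invert z_ndc = 1 + 2*near / z_cam  (z_cam is negative: camera looks down -z)
    z_cam = 2.0 * near / (z_ndc - 1.0)
    # invert x_ndc = -(focal / (W/2)) * x_cam / z_cam, and likewise for y
    x_cam = -x_ndc * z_cam * W / (2.0 * focal)
    y_cam = -y_ndc * z_cam * H / (2.0 * focal)
    pts_cam = np.stack([x_cam, y_cam, z_cam], axis=-1)
    # camera -> world with the 3x4 [R | t] camera-to-world matrix
    return pts_cam @ c2w[:3, :3].T + c2w[:3, -1]
```

If I understand correctly, the "real" depth along the viewing axis would then be `-z_cam`; and if the [0, 1] value is instead the NDC ray parameter `t` that NeRF samples, the depth should be `near / (1 - t)`, which is where `1 / (1 - depth_ndc)` comes from when `near = 1`.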
I read the issue you raised on the original NeRF GitHub, but I think it still does not work properly for my code.
Given `ndc_pts` of shape `N_rays x (Xp, Yp, Zp)`, we back-project `ndc_pts` to world coordinates following your issue.
But what I found in your GitHub issue refers to the need to back-project with `K^-1 * d * (x y 1)`.
For the depth direction, I think `depth_real = 1/(1 - depth_ndc)` is different because of the different coordinate system.
So, does `K^-1 * d * (x y 1)` describe the same process as the NeRF author's code mentioned above (which is what I used)?
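For clarity, this is how I read the `K^-1 * d * (x y 1)` step; a sketch, assuming `d` is a metric depth along the viewing axis and `K` is the usual pinhole intrinsic matrix (this pixel convention is typically (right, down, forward), so the y/z signs differ from NeRF's (right, up, backward) camera frame):

```python
import numpy as np

def pixel_to_camera(x, y, d, K):
    # back-project pixel (x, y) at metric depth d: X_cam = d * K^-1 * (x, y, 1)^T
    pix = np.array([x, y, 1.0], dtype=np.float64)
    return d * (np.linalg.inv(K) @ pix)
```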
Thank you for reading my questions.
If you could reply to this issue, it would be really helpful!