NJU-3DV / Relightable3DGaussian

[ECCV2024] Relightable 3D Gaussian: Real-time Point Cloud Relighting with BRDF Decomposition and Ray Tracing
https://nju-3dv.github.io/projects/Relightable3DGaussian/

get normal from depth #18

Closed JiatengLiu closed 4 months ago

JiatengLiu commented 4 months ago

Your dataset has ground-truth normals, but README.md does not mention how to generate them from the depth map. Could you please provide the corresponding script? Thanks

YuxueYang1204 commented 4 months ago

> Your dataset has ground-truth normals, but README.md does not mention how to generate them from the depth map. Could you please provide the corresponding script? Thanks

@JiatengLiu You can refer to the CUDA code here: https://github.com/NJU-3DV/Relightable3DGaussian/blob/77827626db514d59b390da2474de5d71be61a2e6/r3dg-rasterization/cuda_rasterizer/forward.cu#L425
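For reference, the usual depth-to-normal construction back-projects a pixel and its neighbors into camera space and crosses the two tangent vectors. A minimal CPU-side sketch follows (hypothetical helper names, pinhole intrinsics assumed; not necessarily the exact scheme in forward.cu):

```cpp
// Hypothetical standalone sketch (not necessarily the exact scheme in
// forward.cu): estimate a pseudo normal at pixel (u, v) by back-projecting
// the pixel and its right/bottom neighbors with pinhole intrinsics, then
// crossing the two finite-difference tangent vectors.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

static Vec3 normalize(const Vec3& v) {
    float n = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z) + 1e-8f;
    return { v.x / n, v.y / n, v.z / n };
}

// Back-project pixel (u, v) with depth d into camera space.
static Vec3 backproject(float u, float v, float d,
                        float fx, float fy, float cx, float cy) {
    return { (u - cx) * d / fx, (v - cy) * d / fy, d };
}

// Pseudo normal from forward differences; requires u + 1 < W and v + 1 < H.
Vec3 normal_from_depth(const std::vector<float>& depth, int W,
                       int u, int v,
                       float fx, float fy, float cx, float cy) {
    Vec3 p  = backproject(u,     v,     depth[v * W + u],       fx, fy, cx, cy);
    Vec3 px = backproject(u + 1, v,     depth[v * W + u + 1],   fx, fy, cx, cy);
    Vec3 py = backproject(u,     v + 1, depth[(v + 1) * W + u], fx, fy, cx, cy);
    Vec3 dx = { px.x - p.x, px.y - p.y, px.z - p.z };
    Vec3 dy = { py.x - p.x, py.y - p.y, py.z - p.z };
    return normalize(cross(dx, dy));  // sign depends on the axis convention
}
```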

JiatengLiu commented 4 months ago

@YuxueYang1204 I know that, but it is only a pseudo normal, and in my experiments it is sometimes wrong. For example, the pseudo normal sometimes points in the opposite direction. Is that the case I am talking about?

YuxueYang1204 commented 4 months ago

@JiatengLiu I haven't run the code yet, but I'm confused about why the authors compute normal.x as viewmatrix[0] * normal[0] + viewmatrix[1] * normal[1] + viewmatrix[2] * normal[2] in https://github.com/NJU-3DV/Relightable3DGaussian/blob/77827626db514d59b390da2474de5d71be61a2e6/r3dg-rasterization/cuda_rasterizer/forward.cu#L485 , given that viewmatrix has been transposed. I think it should be viewmatrix[0] * normal[0] + viewmatrix[4] * normal[1] + viewmatrix[8] * normal[2], as in https://github.com/NJU-3DV/Relightable3DGaussian/blob/77827626db514d59b390da2474de5d71be61a2e6/r3dg-rasterization/cuda_rasterizer/auxiliary.h#L152
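To make the indexing question concrete, here is a small illustrative helper (hypothetical, not code from the repo) for a 4x4 matrix stored flat, mirroring the stride-4 indexing in auxiliary.h:

```cpp
// Illustrative helper (hypothetical, not code from the repo). When a 4x4
// matrix is stored flat so that consecutive elements m[0..3] form one
// column, rotating a direction reads one element per column with stride 4,
// exactly like the indexing in auxiliary.h:
struct Vec3 { float x, y, z; };

Vec3 transformDir(const Vec3& n, const float* m /* 16 floats */) {
    return {
        m[0] * n.x + m[4] * n.y + m[8]  * n.z,
        m[1] * n.x + m[5] * n.y + m[9]  * n.z,
        m[2] * n.x + m[6] * n.y + m[10] * n.z,
    };
}

// Using m[0], m[1], m[2] for the x component instead multiplies by the
// first column as if it were a row, i.e. applies the transposed rotation
// (the inverse rotation, since the rotation part is orthonormal).
```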

As for the opposite direction, you can compute the dot product of the normal and the view direction from the Gaussian (or the pixel in world coordinates) to the camera center to determine whether the normal needs to be reversed.
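A minimal sketch of that flip test (hypothetical names, assuming world-space inputs):

```cpp
// Hypothetical sketch of the flip test: reverse the normal when it points
// away from the camera, i.e. when its dot product with the direction from
// the surface point toward the camera center is negative.
struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

Vec3 orientToCamera(Vec3 n, const Vec3& point_world, const Vec3& cam_center) {
    Vec3 to_cam = { cam_center.x - point_world.x,
                    cam_center.y - point_world.y,
                    cam_center.z - point_world.z };
    if (dot(n, to_cam) < 0.0f) {
        n = { -n.x, -n.y, -n.z };  // normal faced away from the camera
    }
    return n;
}
```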

JiatengLiu commented 4 months ago

Thanks for your reply! I will try it later.
