NVlabs / instant-ngp

Instant neural graphics primitives: lightning fast NeRF and more
https://nvlabs.github.io/instant-ngp

Rendering point clouds results in ghosting #922

Open xiepeter opened 2 years ago

xiepeter commented 2 years ago

We tried to get the rendering result from m_nerf.tracer.rays_hit(), calculated depth, color, and normal from ray_o and ray_d, and then used the PCL library to reconstruct the point cloud. This all works, but when we try to concatenate point clouds from different views, the result is ghosting. We suspect three possibilities: 1) the camera extrinsic matrix from COLMAP is not very accurate; 2) m_camera is influenced by mouse direction or dragging; 3) depth is not calculated accurately. Please help! The color and depth calculation is as follows:

```cuda
__global__ void compute_point(
    const uint32_t n_hit,
    Array4f* rgba,        // from m_nerf.tracer.rays_hit().rgba
    float* depth,         // from m_nerf.tracer.rays_hit().depth
    NerfPayload* payload, // from m_nerf.tracer.rays_hit().payload
    Array7f* point        // xyz + rgba; the normal is not calculated here, but by the PCL lib
) {
    const uint32_t threadId_2D = threadIdx.x + threadIdx.y * blockDim.x;
    const uint32_t blockId_2D = blockIdx.x + blockIdx.y * gridDim.x;
    const uint32_t i = threadId_2D + (blockDim.x * blockDim.y) * blockId_2D;

    if (i >= n_hit) return;

    Eigen::Vector3f origin = payload[i].origin;
    Eigen::Vector3f dir = payload[i].dir;
    Eigen::Vector3f xyz = origin + dir * depth[i];
    Array4f rgba_tmp = rgba[i];
    point[i] << xyz(0), xyz(1), xyz(2), rgba_tmp(0,0), rgba_tmp(1,0), rgba_tmp(2,0), rgba_tmp(3,0);
}
```

Two point clouds from different views were concatenated, but they do not match perfectly; it seems the camera matrix is not right. [image: ghosting fox 02]

xiepeter commented 2 years ago

Problem solved. The cause is the difference in direction between m_camera's forward axis and ray_o/ray_d: the rendered depth is measured along the camera forward axis, not along the ray, so it must be divided by the cosine of the angle between them before back-projecting. The code is as follows:

```cpp
float costheta = dir.dot(cam_fwd) / (dir.norm() * cam_fwd.norm());
depth[i] = depth[i] / costheta;  // convert z-depth to distance along the ray
Eigen::Vector3f xyz = origin + dir * depth[i];
```

[image]
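For reference, the same correction can be sketched as a standalone CPU function (a minimal sketch without Eigen; `backproject`, `Vec3`, and the helper names are hypothetical stand-ins for the per-ray quantities above). The idea is that a z-depth measured along the camera forward axis becomes a ray distance after dividing by cos θ:

```cpp
#include <cassert>
#include <cmath>

// Minimal 3-vector helpers (the thread uses Eigen::Vector3f instead).
struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float norm(Vec3 a) { return std::sqrt(dot(a, a)); }

// Convert a z-depth (measured along the camera forward axis cam_fwd) into a
// distance along the viewing ray, then back-project to a 3D point.
Vec3 backproject(Vec3 origin, Vec3 dir, Vec3 cam_fwd, float z_depth) {
    float costheta = dot(dir, cam_fwd) / (norm(dir) * norm(cam_fwd));
    float t = z_depth / costheta;  // distance along the ray
    return { origin.x + dir.x * t,
             origin.y + dir.y * t,
             origin.z + dir.z * t };
}
```

For example, a ray at 45° to the forward axis with z-depth 1 travels a distance of √2 along the ray, landing at the expected 3D point.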

xiepeter commented 2 years ago

But the normals are not quite right at the junction of the two point clouds; they may point in opposite directions. I don't know how to solve this.
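One common cause (a hedged suggestion, not confirmed by the thread): normal estimation from point positions has an inherent sign ambiguity, so normals estimated independently per view can disagree at the seam. The standard fix is to orient each normal toward the camera origin of the view that produced the point (PCL exposes this as `NormalEstimation::setViewPoint`). A minimal standalone sketch, with `Vec3` and `orient_normal` as hypothetical names:

```cpp
#include <cassert>

// Minimal 3-vector type and dot product (sketch; PCL uses its own types).
struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Flip a normal so it points toward the viewpoint (camera origin) of the
// view the point came from. Applying this per view before concatenation
// makes normals from different views consistent at the junction.
Vec3 orient_normal(Vec3 normal, Vec3 point, Vec3 viewpoint) {
    Vec3 to_view = { viewpoint.x - point.x,
                     viewpoint.y - point.y,
                     viewpoint.z - point.z };
    if (dot(normal, to_view) < 0.0f) {
        return { -normal.x, -normal.y, -normal.z };  // flip the sign
    }
    return normal;
}
```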

pellethielmann commented 2 years ago

Hi, I have a question. You said you got the rendering result from m_nerf.tracer.rays_hit() and calculated the depth values and so on for each ray from there. Does this mean the rendering result gives you the sampled rays back? Do they come with the payload?

I'm struggling with issue #1065; maybe you can help me here.

Mehi44 commented 1 year ago

Hi there, I am also trying to generate a point cloud from a trained model, but I can't understand how you did it. Would you kindly explain where to find this m_nerf.tracer.rays_hit() function and how to use it to generate point clouds?