Open maliksyria opened 1 year ago
I'm sorry for tagging you, @Totoro97. Could you please take a look? What do you think this issue is related to? Is it related to warping the space or to the perspective sampling strategy? For all scenes I'm getting this radially distorted point cloud.
For those who have this problem: in rendering mode, comment out this line https://github.com/Totoro97/f2-nerf/blob/68755d3f4b7108b0750d4cac1932c2f2dae5c8ac/src/PtsSampler/PersSampler.cu#L319. An extra normalization of the ray directions before sampling is what causes this issue.
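To illustrate why an extra normalization skews the rendered depth, here is a minimal sketch of the common pinhole convention (illustrative values, not the repo's actual code): when ray directions keep the z = 1 scaling, the accumulated sample parameter t equals z-depth; once the directions are re-normalized to unit length, t becomes Euclidean distance along the ray, which is larger toward the image corners.

```python
import numpy as np

# Minimal sketch, assuming a standard pinhole camera; fx, fy, cx, cy and the
# z = 1 ray convention are illustrative assumptions, not f2-nerf's code.
fx = fy = 500.0
cx, cy = 320.0, 240.0
u, v = 100.0, 50.0          # an off-center pixel

d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])  # direction with d_z = 1

Z = 2.0                     # a point at z-depth Z along this pixel's ray
p = d * Z                   # with d_z = 1, the ray parameter t equals Z

t_normalized = np.linalg.norm(p)   # t after re-normalizing d to unit length
print(t_normalized / Z)            # = ||d|| = 1/cos(theta) > 1, grows radially
```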
I am also doing something similar, using the ngp_fox dataset for testing, and I found that even if I comment out that line of code, the result of fb/depth (using render_result.depth) still doesn't match the ground truth. Have you run into the same issue? I also have a question: is the depth rendered here a distance map rather than a depth map?
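If it is indeed a distance map, it can be converted to a z-depth map per pixel before computing fb/depth. A hedged sketch, assuming a plain pinhole model (the function name and arguments are mine):

```python
import numpy as np

def distance_to_zdepth(dist, fx, fy, cx, cy):
    # dist: (H, W) Euclidean distance along each pixel's ray.
    # z-depth = distance * cos(theta) = distance / ||d|| for the
    # z = 1 ray direction d = [(u - cx)/fx, (v - cy)/fy, 1].
    h, w = dist.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    norm = np.sqrt(((u - cx) / fx) ** 2 + ((v - cy) / fy) ** 2 + 1.0)
    return dist / norm
```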
@xubin1994 What is the exact issue? And where did you get the depth GT for the ngp_fox dataset?
As shown in my picture, I used b = 0.5 to render a pair of left and right images. Then I used a SOTA stereo matching network (RAFT) to estimate the corresponding disparity as gt_disp. At the same time, I computed the disparity from the rendered depth and the corresponding focal length as nerf_disp = fb / depth, but nerf_disp and gt_disp differ by a relatively large scale.
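One way to check whether the mismatch is a single global scale (which scene normalization would explain) or a spatially varying factor (which a distance-vs-depth mixup would explain) is to fit one least-squares scale and inspect the residual ratio map. A sketch, where nerf_disp, gt_disp, and valid are assumed (H, W) arrays and a mask:

```python
import numpy as np

def compare_disparities(nerf_disp, gt_disp, valid):
    a, b = nerf_disp[valid], gt_disp[valid]
    scale = (a * b).sum() / (a * a).sum()      # best single global scale
    ratio = np.where(nerf_disp > 0, gt_disp / nerf_disp, 0.0)
    # If 'ratio' is roughly constant, it's only a normalization scale;
    # if it grows radially from the principal point, the rendered depth
    # is likely a distance map rather than a z-depth map.
    return scale, ratio
```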
Which image are you taking the rendered depth from, left or right? It must be the left one. Have you tried looking at the point cloud of nerf_depth = 1/nerf_disp, using Open3D for example? @xubin1994
Yes, I used the left one. Why use nerf_depth = 1/nerf_disp? What does that mean? I used gt_disp to convert to a point cloud, and the result was correct and without distortion.
I meant to double-check how the depth (not the disparity) rendered from F2-NeRF looks after your calculations.
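In case it helps, a minimal Open3D sketch of the check I have in mind: back-project the rendered depth with the COLMAP intrinsics and inspect the cloud (names like depth, fx, cx are placeholders):

```python
import numpy as np
import open3d as o3d

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    # Back-project a z-depth map into a camera-frame point cloud.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(pts[depth.reshape(-1) > 0])
    return pcd

# If planar surfaces (walls, ground) come out bowl-shaped, the depth is
# distorted, e.g. a distance map treated as z-depth.
# o3d.visualization.draw_geometries([depth_to_pointcloud(depth, fx, fy, cx, cy)])
```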
Hello! Thanks for this very good work! I have a question. I'm rendering some scenes, and both the rendering and the disparity look very good. The idea is that I'm rendering a stereo pair: one image is the rendered image of the dataset (as the left image), and the other is another rendered image moved along the x axis by a virtual baseline value b (as the right image). I'm taking the disparity value from this line:
Tensor disparity = FlexOps::Sum(weights / sampled_t, idx_start_end);
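(Note that this line renders disparity as the weighted sum of inverse sample distances, sum_i w_i / t_i, which is not in general the inverse of the rendered depth sum_i w_i * t_i; a tiny numeric check:)

```python
import numpy as np

# Toy check: sum_i w_i / t_i  vs  1 / (sum_i w_i * t_i).
w = np.array([0.1, 0.6, 0.3])   # sample weights along one ray
t = np.array([1.0, 2.0, 4.0])   # sample distances

disp_rendered = np.sum(w / t)   # what the quoted line computes: 0.475
depth_rendered = np.sum(w * t)  # the usual rendered depth: 2.5
print(disp_rendered, 1.0 / depth_rendered)  # 0.475 vs 0.4 -- not equal
```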
The issue is that when I render the disparity using that baseline b and the focal length computed by COLMAP, I get visually excellent results but odd numbers. So I un-normalize the disparity as follows:
Tensor pred_disps_nn = (pred_disps / (-pred_disps * dataset_->center_[2].to(torch::kCPU) + dataset_->radius_)).contiguous();
Is this the correct formula to un-normalize disparity? (See the sketch at the end of this post for the relation I would expect.) For example, I have the following image:
F2-NeRF generated this disparity:
The error image between the generated disparity and the ground truth looks like this:
It shows that in the center of the image and around it the error is almost zero (which is perfect), but for some reason the error gradually grows in a circular pattern toward the corners.
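This radial pattern is what one would expect if the rendered quantity were distance along the ray rather than z-depth: the two differ by a factor of 1/cos(theta), which is 1 at the principal point and grows in circles toward the corners. A quick way to test that hypothesis (intrinsics are placeholders):

```python
import numpy as np

def radial_factor_map(h, w, fx, fy, cx, cy):
    # ||d(u, v)|| = 1 / cos(theta) for the z = 1 pinhole ray directions.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    return np.sqrt(((u - cx) / fx) ** 2 + ((v - cy) / fy) ** 2 + 1.0)

# If the error image's iso-contours match this map's circles around the
# principal point, the depth/disparity is off by exactly this factor.
```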
Meanwhile, the point cloud for this image looks like this:
If we look closely at the ground, we can see how it is twisted, which shows, I suppose, the effect of the warping function not mapping back to Euclidean space correctly, or could it be due to another reason?
I'm seeing this in all scenes. Is this normal behavior, or could it be related to the perspective warping or sampling algorithms presented in this paper?
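For reference, here is the un-normalization I would expect, under the assumption (not verified against the code) that f2-nerf normalizes the scene by a pure shift-and-scale, p_norm = (p - center) / radius. A similarity transform leaves camera-relative depth simply rescaled by radius, so the center term would not appear:

```python
# Assumption, not verified against f2-nerf's code: poses are normalized as
# p_norm = (p - center) / radius. A pure shift-and-scale changes
# camera-relative depth only by the factor 'radius', so:
def unnormalize_disparity(disp_norm, radius):
    # depth_world = depth_norm * radius  =>  disp_world = disp_norm / radius
    return disp_norm / radius
```

If that assumption holds, the dataset_->center_[2] term should not enter the formula at all; if the repo's normalization is not a plain similarity transform, my sketch would be wrong instead.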