Totoro97 / f2-nerf

Fast neural radiance field training with free camera trajectories
https://totoro97.github.io/projects/f2-nerf/
Apache License 2.0
933 stars 69 forks

eliminate floating #61

Closed Bin-ze closed 1 year ago

Bin-ze commented 1 year ago

Thanks for your new perspective and great work! I tried using f2-nerf for large-scene reconstruction and got good results.

However, floaters appear along some rendering paths, which hurts the final quality.

Do you have any ideas for eliminating the floaters? I tried a parallax-regularized loss, but it makes the geometry worse.

Could you give me some advice, dear author?

Totoro97 commented 1 year ago

Hi, there can be several reasons for the floaters, and you may first want to check if the camera poses are correct.

In addition, I just added a gradient scaling strategy that penalizes the gradients of near samples, which might help reduce the floaters. You can try the latest commit with the config wanjinyou.yaml or wanjinyou_big.yaml.

Look forward to your feedback😃

Bin-ze commented 1 year ago

Is the floating related to the algorithm's octree-based space-division strategy? Looking at the result, the estimated depth at the positions where the floaters appear is invalid. I am a bit confused about why this happens; can you give some advice?

image
Bin-ze commented 1 year ago

I used the gradient scaling strategy and it helped me reduce the floaters, but a new problem arises: the depth map becomes discontinuous, and there are holes on borders close to the camera. This seems to be caused by the gradient scaling strategy. How can it be adjusted to alleviate such problems?

image
Totoro97 commented 1 year ago

Hi, I think this effect is expected. The depth values are hard to estimate correctly because of the textureless white walls. Adding depth constraints from a pre-trained depth-estimation network might help. I am sorry that at this stage I cannot think of a simple, quick solution to alleviate such problems.

Bin-ze commented 1 year ago

Thanks for the reply, but this behavior only occurs after adding gradient scaling; before that, the depth of the wall looked fine. The previous problem was that along a certain section of the rendering path there were some floaters, as shown in: image After I added gradient scaling, the previous floaters disappeared, but the wall close to the camera became fragmented. If the gradient-scaling value could be controlled so that it approaches 1 as the distance from the camera increases, maybe it could be tuned manually to reduce the impact on the wall.

Bin-ze commented 1 year ago

How does gradient scaling regularize samples close to the camera? I checked the implementation:

  // n_rays: number of rays; c: number of gradient channels per sample;
  // progress: training progress in [0, 1]; rand_val: unused here;
  // idx_start_end: per-ray [start, end) sample indices;
  // out_vals: per-sample gradients, scaled in place.
  __global__ void GradientScalingBackwardKernel(int n_rays, int c, float progress, float* rand_val, int* idx_start_end, float* out_vals) {
    int idx = LINEAR_IDX();  // one thread per ray
    if (idx >= n_rays) return;
    int idx_start = idx_start_end[idx * 2];
    int idx_end   = idx_start_end[idx * 2 + 1];
    for (int i = 0; i + idx_start < idx_end; i++) {
      // a: normalized position of sample i along the ray, in (0, 1)
      float a = (float(i) + .5f) / float(idx_end - idx_start);
      // scale rises from `progress` near the camera towards 1 at the far end
      float cur_scale = progress + (1.f - progress) * a * a;
      for (int j = 0; j < c; j++) {
        out_vals[(i + idx_start) * c + j] *= cur_scale;
      }
    }
  }

The scaling factor is computed here:

float cur_scale = progress + (1.f - progress) * a * a;

It gradually increases to 1 as a increases, where a represents the normalized distance of the sampling interval along the ray.

Does this mean the strategy gives a larger gradient to the sampling points close to the camera, so that the model pays more attention to that part? @Totoro97 Sorry to bother you, but could you explain the meaning of each variable in this code? I want to design a new scaling factor so that the sampling points close to the camera receive a small gradient, but the damping decays quickly with distance, so as to overcome the previous problem.

Totoro97 commented 1 year ago

Does it mean that this strategy will give a larger gradient to the sampling points close to the camera, so that the model pays more attention to the part?

Hi, this strategy gives smaller gradients to the sampling points close to the camera and larger gradients to the far sampling points, so that the far parts are learned more easily and the floaters in the near parts are reduced. The floaters mainly arise because the near parts (sometimes too near, if the preset near-far bounds of the rays are very loose) are learned first and overfit the training images.

maybe it could be tuned manually to reduce the impact on the wall

Maybe one way is not to use the gradient scaling strategy but to preset tighter near-far bounds for the rays. Specifically, disable the gradient scaling strategy by setting train.gradient_scaling_start=0 and train.gradient_scaling_end=0, and increase the near bound of the rays by increasing pts_sampler.near.
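Assuming the hydra-style launch command from the f2-nerf README (the dataset/case names and the near value here are placeholders, not recommendations), those overrides might look like:

```shell
# Hypothetical invocation: dataset_name, case_name, and pts_sampler.near
# are placeholders; only the two gradient_scaling overrides and
# pts_sampler.near come from the advice above.
python scripts/run.py --config-name=wanjinyou \
  dataset_name=example case_name=my_scene mode=train \
  train.gradient_scaling_start=0 train.gradient_scaling_end=0 \
  pts_sampler.near=0.5
```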