NVlabs / CF-3DGS


What is the core of the algorithm? #28

Closed Bin-ze closed 1 month ago

Bin-ze commented 3 months ago

What is the core of the algorithm?

Great work! But after reading the paper I find it hard to understand why the algorithm is stable. My specific questions are:

  1. Camera tracking: tracking comes from optimizing the pose against the photometric error on local keyframes, but in my tests this process struggles to recover a robust pose, especially when the viewpoint moves a lot or the scene is weakly textured. Once pose estimation fails, I don't see how the subsequent incremental process can correct the error, rather than simply reducing the loss by overfitting.

  2. Viewpoint interval: I am curious what frame spacing the algorithm uses across the various datasets you tried, e.g. how many frames per second?

  3. Reconstruction range: the paper mentions that densification happens quickly along the directions newly exposed by a new viewpoint. If a scene is not object-centric, can optimization migrate the point cloud from the initial frame out to distant regions? Have you tried this?

Thanks again!
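As context for point 1, photometric tracking amounts to gradient descent on the intensity residual with respect to the pose. A minimal 1-D analogue (a hypothetical numpy-only sketch, with the "pose" reduced to a single shift parameter; not the CF-3DGS implementation) also shows the weak-texture failure mode:

```python
import numpy as np

def photometric_track(ref, cur, lr=20.0, iters=1000):
    """Toy 1-D analogue of photometric pose tracking: estimate the shift
    between two signals by gradient descent on the intensity residual."""
    x = np.arange(len(ref), dtype=float)
    dref = np.gradient(ref)                    # spatial derivative of ref
    s = 0.0                                    # initial "pose" (shift) guess
    for _ in range(iters):
        warped = np.interp(x - s, x, ref)      # ref warped by the current shift
        grad_w = np.interp(x - s, x, dref)     # derivative at warped samples
        r = warped - cur                       # photometric residual
        s -= lr * 2.0 * np.mean(r * -grad_w)   # gradient step on the shift
    return s

# Well-textured signal: a Gaussian bump shifted by 3 samples is recovered.
x = np.arange(100, dtype=float)
ref = np.exp(-0.5 * ((x - 50) / 5.0) ** 2)
cur = np.exp(-0.5 * ((x - 53) / 5.0) ** 2)
print(photometric_track(ref, cur))

# Weakly textured signal: the image gradient vanishes, so the update is
# zero and tracking stalls -- the failure mode described in point 1.
flat = np.full(100, 0.5)
print(photometric_track(flat, flat))
```

The same logic explains why large viewpoint jumps are hard: once the true shift leaves the basin of the local gradient, descent has nothing to follow.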

Vincento-Wang commented 3 months ago


You can read some papers on unsupervised monocular depth estimation; they have your answer.
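The core those methods share is a differentiable photometric warping loss: predicted depth and relative pose warp the source frame onto the target, and the intensity difference supervises both. A sketch of the warp's geometry (a hypothetical pinhole-camera helper in numpy, not code from any of those papers):

```python
import numpy as np

def reproject(depth, K, R, t):
    """Back-project target pixels with their depth, move them by the
    relative pose (R, t), and project into the source view. Sampling the
    source image at the returned coordinates and comparing it to the
    target image gives the photometric (view-synthesis) loss."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W, dtype=float), np.arange(H, dtype=float))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])  # 3 x HW homogeneous
    cam = np.linalg.inv(K) @ pix * depth.ravel()            # back-project to 3D
    cam_src = R @ cam + t[:, None]                          # apply relative pose
    proj = K @ cam_src                                      # project into source
    return (proj[:2] / proj[2]).T.reshape(H, W, 2)          # (x, y) per pixel

# Sanity check: under the identity pose every pixel maps back onto itself.
depth = np.full((4, 5), 2.0)
K = np.array([[100.0, 0.0, 2.5], [0.0, 100.0, 2.0], [0.0, 0.0, 1.0]])
coords = reproject(depth, K, np.eye(3), np.zeros(3))
```

Because the whole chain is differentiable, gradients of the photometric error flow back into both the depth and the pose, which is why a wrong pose can sometimes be absorbed by distorted geometry instead of being corrected.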