efyphil opened this issue 4 months ago
This view seems very out of distribution relative to the training set. One sanity check you can do: render the depths at the trained views and see whether they look reasonable. The angle you are showing appears to be far from the actual training views.
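For example, something like this (a rough sketch; the `render` call and the `"depth"` output key are assumptions about the codebase, not its confirmed API):

```python
import os
import numpy as np
import matplotlib.pyplot as plt

def save_depth_previews(train_cameras, gaussians, render, out_dir="depth_checks"):
    """Render each *training* view and dump a normalized depth image,
    so you can eyeball whether the supervised depths look sane."""
    os.makedirs(out_dir, exist_ok=True)
    for i, cam in enumerate(train_cameras):
        out = render(cam, gaussians)  # assumed to return {"render": ..., "depth": ...}
        depth = out["depth"].detach().cpu().numpy().squeeze()
        valid = depth > 0             # background often rasterizes to depth 0
        vis = np.zeros_like(depth)
        if valid.any():
            d = depth[valid]
            vis[valid] = (d - d.min()) / (d.max() - d.min() + 1e-8)
        plt.imsave(os.path.join(out_dir, f"depth_{i:03d}.png"), vis, cmap="turbo")
```

If the depths already look wrong at the training views, the problem is the supervision itself, not the novel viewpoint.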
Maybe the project just uses few views in the training set, so it does not perform as well as the original 3DGS.
Ideally, few views plus depth supervision on those views should be an improvement over the 3DGS baseline. Maybe move the camera pose used for rendering and show the result?
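Something like this could step the render camera away from a training pose a few degrees at a time (a sketch only; it assumes a world-to-camera R/t convention, which may not match this repo's):

```python
import numpy as np

def yaw_perturbed_pose(R, t, yaw_deg=5.0):
    """Rotate the camera about its own up (y) axis so the render
    moves slightly away from the exact training view.
    R: 3x3 world-to-camera rotation, t: 3-vector translation."""
    a = np.deg2rad(yaw_deg)
    yaw = np.array([[ np.cos(a), 0.0, np.sin(a)],
                    [ 0.0,       1.0, 0.0      ],
                    [-np.sin(a), 0.0, np.cos(a)]])
    # Apply the yaw in camera coordinates: x_cam' = yaw @ (R @ x + t)
    return yaw @ R, yaw @ t
```

Rendering at yaw_deg = 0, 5, 10, ... should show roughly where the reconstruction starts to break down.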
I got similar results to those posted, on the mipnerf360 dataset. Even method 1 seems to perform far worse than original 3D GS, even though method 1 is supposed to be the same as original 3D GS. Both method 1 and method 2 are also much slower (at least 5x the training time) than original 3D GS.
Unclear why, but depth is predicted very poorly here; running MiDaS by itself gives fine depth. I used method 1 for training and still got worse results than the vanilla version of 3DGS.
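For reference, this is the standard way to run MiDaS on a single image via torch.hub (following the official intel-isl/MiDaS instructions), which is an easy way to check that the monocular depth on its own looks fine:

```python
import cv2
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the DPT_Large MiDaS model and its matching input transform.
midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large").to(device).eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.dpt_transform

img = cv2.cvtColor(cv2.imread("view.png"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    prediction = midas(transform(img).to(device))
    # Resize the prediction back to the input resolution.
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False).squeeze()

depth = prediction.cpu().numpy()  # relative (inverse) depth, not metric
```

Note that MiDaS outputs relative inverse depth, so if a training loss treats it as metric depth without calibration, that alone could explain poor results.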