autonomousvision / mip-splatting

[CVPR'24 Best Student Paper] Mip-Splatting: Alias-free 3D Gaussian Splatting
https://niujinshuchong.github.io/mip-splatting/

artifacts in almost every predicted image #26

Closed Ericgone closed 4 months ago

Ericgone commented 5 months ago

Hi,

thank you for your great work. But when I tried your method, I found that there are artifacts in almost every predicted image. Is it because we didn't run "create_fused_ply.py" to get the filtered solution before rendering? After I created the fused ply file, copied it into the model path, renamed it to point_cloud, and ran "render.py", an error occurred. So we don't need to fuse the point cloud before we render? But, as I said at the beginning, artifacts exist in almost every predicted image, which is totally different from what you showed in your paper. Could you tell me what's going wrong here? Thanks

best,

Yang-Xijie commented 5 months ago

related issues: https://github.com/autonomousvision/mip-splatting/issues/10 and https://github.com/autonomousvision/mip-splatting/issues/4

Ericgone commented 5 months ago

> related issues: #10 and #4

Hi, thank you for your reply. I found that #10 raises a similar doubt, but there is a difference: I don't understand what "create_fused_ply.py" is used for. The 3D low-pass filter is already used in training, right? In ./scripts/run_mipnerf360.py, you sequentially run train.py, render.py, and metrics.py. As far as I know, "metrics.py" outputs the PSNR, SSIM (to name a few) values, right? You run "metrics.py" directly after running "render.py"; that is, you didn't run "create_fused_ply.py". So what is this file used for?

Regarding the artifacts in predicted images: I just directly ran "run_mipnerf360.py", so I guess I don't need to fuse anything before obtaining the metric values (like PSNR, SSIM, and so on). My problem is that after running "run_mipnerf360.py" on the treehill dataset, I checked the "test" folder where the predicted images are located and found the aforementioned artifacts.

Yang-Xijie commented 5 months ago

This paper proposes two filters; for simplicity, I call them the "2D filter" and the "3D filter". The "2D filter" is used when rendering images, and the "3D filter" is used to represent the scene. So the "2D filter" can be considered a function in the rendering code, and the "3D filter" can be considered a property stored in the ply file.
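To make the distinction concrete, here is a minimal NumPy sketch of what a screen-space (2D) low-pass filter can look like: the projected 2D covariance is convolved with a small isotropic kernel, and the opacity is rescaled so the Gaussian's integrated contribution is preserved. The function name and the kernel size `s=0.1` are illustrative assumptions, not the repo's actual API.

```python
import numpy as np

def apply_2d_mip_filter(cov2d, opacity, s=0.1):
    """Sketch of a 2D Mip-style filter on one projected Gaussian.

    cov2d:   (2, 2) projected screen-space covariance
    opacity: scalar opacity of the Gaussian
    s:       low-pass kernel variance (illustrative value)
    """
    # Convolving two Gaussians adds their covariances
    cov_filtered = cov2d + s * np.eye(2)
    # Rescale opacity so the Gaussian's total "mass" stays constant
    coef = np.sqrt(np.linalg.det(cov2d) / np.linalg.det(cov_filtered))
    return cov_filtered, opacity * coef
```

Because the filter happens per-rendered-pixel-footprint, it lives in the rasterization code path rather than in the saved ply, which matches the "function in the rendering code" framing above.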

In https://github.com/autonomousvision/mip-splatting/issues/10, the author mentioned that "the viewer expects fused point cloud as input". The viewer provided by the author is adapted from a viewer for the original Gaussian Splatting. I guess the author of Mip-Splatting only implements the "2D filter" in the viewer. So, before using the viewer provided by the author, you should run create_fused_ply.py to convert the ply saved by Mip-Splatting into a ply conforming to the data structure of the original Gaussian Splatting (removing the filter_3D property and accordingly modifying scale and opacity).
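Conceptually, "fusing" means baking the stored filter_3D property into each Gaussian's scale and opacity so a vanilla 3DGS viewer can consume the ply. Below is a hedged NumPy sketch of that variance-and-opacity update; the real create_fused_ply.py operates on PyTorch tensors and activation-space values, so the details differ.

```python
import numpy as np

def fuse_3d_filter(scales, opacities, filter_3d):
    """Bake an isotropic 3D low-pass filter into scale and opacity.

    scales:    (N, 3) per-axis standard deviations
    opacities: (N, 1) opacities
    filter_3d: (N, 1) per-Gaussian filter standard deviation
    """
    scales_sq = scales ** 2
    # Convolution with the filter adds variances along each axis
    fused_sq = scales_sq + filter_3d ** 2
    # Compensate opacity by the ratio of normalization constants,
    # so the fused Gaussian integrates to the same contribution
    coef = np.sqrt(scales_sq.prod(axis=1) / fused_sq.prod(axis=1))
    return np.sqrt(fused_sq), opacities * coef[:, None]
```

After this update, the filter_3D field is no longer needed and can be dropped from the ply, which is why the fused file matches the original Gaussian Splatting layout.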

Yang-Xijie commented 5 months ago

You don't need to run create_fused_ply.py to get the metrics. As with the original Gaussian Splatting, you run metrics.py to derive them.

niujinshuchong commented 4 months ago

Thanks for answering. I suppose the issue is addressed and will therefore close it.