Open jiayangshi opened 10 months ago
I think the blurriness is due to the naive initialisation in my method or the kernel size used; technically, this method should be able to produce a super sharp image.
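To illustrate how the kernel parameters control sharpness, here is a minimal NumPy sketch of building a normalised 2D Gaussian kernel on a pixel grid (a hypothetical stand-in for the repo's PyTorch kernel builder, not the actual implementation). A small sigma concentrates mass at the centre pixel (a sharp splat), while a large sigma spreads it out (a blurry splat):

```python
import numpy as np

def gaussian_kernel_2d(size, sigma):
    # Pixel coordinates centred on the kernel window.
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    # Isotropic 2D Gaussian, normalised so the weights sum to 1.
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

# Small sigma -> most of the weight on the centre pixel (sharp splat);
# large sigma -> weight spread over the window (blurry splat).
sharp = gaussian_kernel_2d(11, 0.8)
blurry = gaussian_kernel_2d(11, 4.0)
print(sharp.max(), blurry.max())
```

The same trade-off applies per splat during optimisation: if the initial sigmas are too large (or the kernel window too small to represent narrow Gaussians), fine detail is hard to recover.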
The performance-related issue, I think, mainly comes down to not using a differentiable renderer and instead relying purely on PyTorch CUDA for Gaussian kernel creation. We recently wrote a Python module as the first step towards writing a fully differentiable renderer (https://github.com/OutofAi/cudacanvas), but until that's done I don't think there is much more performance improvement I can get. Someone else did a performance study on this, proving to some extent that a differentiable renderer could potentially improve performance significantly: https://github.com/OutofAi/2D-Gaussian-Splatting/issues/2#issuecomment-1871791703
Also, in terms of memory, I am currently using a significant number of extra backup points, which are not really needed considering the image converges with around 3000 points at the end. You can probably reduce `backup_samples` from 4000 to 2000 for lower memory usage.
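As a rough back-of-the-envelope check on the saving, here is a hedged sketch: the per-point parameter count (position, scale, rotation, RGB, alpha, roughly 8 fp32 values) is an assumption for illustration, not the repo's exact layout, and optimizer state (e.g. Adam's two moment buffers) multiplies whatever the true figure is:

```python
# Assumed per-point layout: x, y, sx, sy, theta, r, g, b -> 8 floats (fp32).
FLOATS_PER_POINT = 8
BYTES_PER_FLOAT = 4

def param_bytes(n_points):
    # Raw parameter memory only; optimizer state would add a multiple of this.
    return n_points * FLOATS_PER_POINT * BYTES_PER_FLOAT

print(param_bytes(4000))  # backup_samples = 4000
print(param_bytes(2000))  # halving backup_samples halves parameter memory
```

The absolute numbers are tiny either way; the real memory pressure in practice comes from the per-point kernel tensors rasterised on the GPU, which also scale linearly with the point count.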
Perhaps fitting 2D images with 2D Gaussians is similar to RBF fitting, with an effect similar to https://arxiv.org/pdf/2006.09661.pdf, unless more kernels are used.
I also find this issue very interesting. If anyone has conducted a comparative analysis, please let me know.
Hi, thank you for your great work!
I have a question regarding the performance. In the example case of fitting 2D Gaussians to a single image, even though the ground truth (the single image) is provided, the image reconstructed by the 2D Gaussians still comes out somewhat blurry and misses fine details.
In contrast, an Implicit Neural Representation (INR) fitted to the same image can eventually represent it near-perfectly. Would you happen to have any insights on why image fitting remains difficult for 2D Gaussian Splatting? Or how can we squeeze out the best performance for it? Thank you!