rebecca-lay3rs opened this issue 7 months ago
Hello @rebecca-lay3rs,
Thank you so much for your nice words!
Indeed, I've checked this paper! You're right, this is a very interesting suggestion, thanks for that. I'm certain SuGaR's mesh extraction process can benefit from such strategies. We will definitely explore this direction.
Also, you're right about this specific background problem: the current version of SuGaR has trouble with images of segmented objects, as flat monochrome backgrounds create monochrome artifacts (basically, SuGaR tries to reconstruct the background as a monochrome surface, which is a pretty silly failure mode, haha; we should just tell SuGaR not to consider this part of the image). I have (more or less) fixed this problem with a very simple solution, and I will push a new version of the code in (I hope) a few days that can handle images of segmented objects (i.e., with a masked background).
This fix could probably improve the quality of your reconstruction!
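For anyone curious, the usual way to make a splatting model ignore a masked background is to restrict the photometric loss to foreground pixels. This is only a sketch of that idea, not the actual SuGaR code: the helper name `masked_l1_loss` and the tensor shapes are my own assumptions.

```python
import numpy as np

def masked_l1_loss(rendered, target, mask):
    """L1 photometric loss restricted to foreground pixels.

    rendered, target: (3, H, W) float arrays (rendered and ground-truth images).
    mask: (H, W) array of 1s (foreground) and 0s (background).

    Background pixels contribute nothing to the loss, so the optimizer
    is never pushed to explain the flat monochrome background with
    extra Gaussians.
    """
    m = mask[None, :, :]                          # broadcast over color channels
    diff = np.abs(rendered - target) * m
    # Normalize by the number of foreground pixel-channels only.
    return diff.sum() / (m.sum() * rendered.shape[0] + 1e-8)
```

With a loss like this (plus pruning Gaussians that only ever project onto masked pixels), the monochrome-surface artifact should simply never be optimized into existence.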
Hi Antoine!
Thanks for the really great work!
I would like to ask if you have checked the paper "Binary Opacity Grids" (https://arxiv.org/pdf/2402.12377.pdf)? They also use an opacity threshold to identify the surface, and they further complement this by filtering out wrong opacity predictions using volumetric fusion (re-projecting the opacities onto 2D depth maps; see Fig. 3 and Appendix A of the paper).
Have you thought about implementing some of this paper's features in SuGaR? I was wondering if this could help filter the bad Gaussians we obtain when using objects without a background, such as:
Thank you!
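(Adding a toy illustration of the fusion-style filtering I mean, in case it helps the discussion. This is a much-simplified take on the idea, not the paper's actual implementation; the function name, camera convention, and `tol` threshold are all my own assumptions.)

```python
import numpy as np

def filter_floaters(centers, cameras, depth_maps, tol=0.05):
    """Keep only Gaussians whose centers agree with rendered depth maps.

    Simplified fusion-style check: project each Gaussian center into every
    view and drop it if it sits well in front of the rendered surface,
    i.e. it floats in free space that the depth maps say is empty.

    centers: (N, 3) world-space Gaussian centers.
    cameras: list of (K, R, t) tuples with K (3,3) intrinsics,
             R (3,3) world-to-camera rotation, t (3,) translation.
    depth_maps: list of (H, W) rendered depth maps, one per camera.
    Returns a boolean mask of Gaussians to keep.
    """
    keep = np.ones(len(centers), dtype=bool)
    for (K, R, t), depth in zip(cameras, depth_maps):
        cam_pts = centers @ R.T + t                     # world -> camera
        z = cam_pts[:, 2]
        uv_h = cam_pts @ K.T                            # project to pixels
        uv = uv_h[:, :2] / np.clip(uv_h[:, 2:3], 1e-8, None)
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        H, W = depth.shape
        visible = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        surf = np.where(
            visible,
            depth[np.clip(v, 0, H - 1), np.clip(u, 0, W - 1)],
            np.inf,
        )
        # A floater sits clearly in front of the observed surface.
        keep &= ~(visible & (z < surf - tol))
    return keep
```

Points behind the surface are kept (they may just be occluded in that view); only points the depth maps place in empty space get culled, which is the kind of cleanup that might remove the floaters visible in the screenshot above.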