autonomousvision / gaussian-opacity-fields

[SIGGRAPH Asia'24 & TOG] Gaussian Opacity Fields: Efficient Adaptive Surface Reconstruction in Unbounded Scenes
https://niujinshuchong.github.io/gaussian-opacity-fields/

Foreground extract #77

Open Kairui-SHI opened 2 months ago

Kairui-SHI commented 2 months ago

Hi, thanks for your great work. I noticed that you recommend (below) setting a bbox to select which Gaussian points to save. May I ask whether I can instead use something like near/far planes to extract the foreground, given that I have depth images? How should I set the parameters, or do I need to write this myself?

"Otherwise if you only want to extract the mesh for the foreground region, you can define a bbox for the object and only use the gaussian within the bbox for mesh extraction by changing the code here https://github.com/autonomousvision/gaussian-opacity-fields/blob/main/scene/gaussian_model.py#L379-L384." Originally posted by @niujinshuchong in https://github.com/autonomousvision/gaussian-opacity-fields/issues/37#issuecomment-2124196291
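The bbox-based filtering referenced above could be sketched roughly as follows. This is a minimal illustration, not the repository's code: `filter_by_bbox` is a hypothetical helper, and the axis-aligned bbox corners are assumed inputs.

```python
import numpy as np

def filter_by_bbox(points, bbox_min, bbox_max):
    """Return a boolean mask selecting Gaussian centers inside an
    axis-aligned bounding box.

    points:   (N, 3) array of world-space Gaussian centers
    bbox_min: (3,) lower corner of the bbox
    bbox_max: (3,) upper corner of the bbox
    """
    # A point is kept only if all three coordinates lie within the box.
    mask = np.all((points >= bbox_min) & (points <= bbox_max), axis=1)
    return mask
```

In the linked `gaussian_model.py` code, a mask like this would be applied to the Gaussian attributes (positions, scales, opacities, etc.) before they are used for mesh extraction.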

niujinshuchong commented 1 month ago

Hi, currently the near plane is hard-coded in the CUDA code https://github.com/autonomousvision/gaussian-opacity-fields/blob/main/submodules/diff-gaussian-rasterization/cuda_rasterizer/auxiliary.h#L27. I think you could project all the Gaussian centers into the cameras and only keep the Gaussians between the near and far planes when you generate the tetrahedral grids. You can change the code here: https://github.com/autonomousvision/gaussian-opacity-fields/blob/main/scene/gaussian_model.py#L392-L397
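The projection-and-filter idea above could be sketched as follows. This is a hedged illustration, not code from the repository: `filter_by_depth` is a hypothetical helper, and it assumes an OpenCV-style world-to-camera matrix where the camera looks down +z.

```python
import numpy as np

def filter_by_depth(points, w2c, near, far):
    """Return a boolean mask for Gaussian centers whose camera-space
    depth lies between the near and far planes.

    points: (N, 3) world-space Gaussian centers
    w2c:    (4, 4) world-to-camera matrix (+z pointing into the scene)
    near:   near-plane distance
    far:    far-plane distance
    """
    # Lift to homogeneous coordinates and transform into camera space.
    pts_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    cam = pts_h @ w2c.T  # (N, 4) camera-space coordinates
    z = cam[:, 2]        # depth along the camera's viewing axis
    return (z > near) & (z < far)
```

With multiple training cameras, one option is to keep a Gaussian if it falls within the near/far range in at least one view, i.e. OR the per-camera masks together before generating the tetrahedral grid.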

Kairui-SHI commented 1 month ago

I see, that's indeed one way to extract the mesh using near/far planes. So we always need to use the w2c matrix to project all Gaussian points into each camera view. Thanks for your help!