Started work with an off-screen (invisible) PyOpenGL + GLUT window that can be rendered to. The idea is to draw the OpenGL-related items to that canvas before the model's forward render. The OpenGL output (RGB + A + depth) is then passed into the GS rendering backend so compositing works correctly against the OpenGL render: splats in front of the depth buffer are splatted as usual, while splats behind it are not splatted immediately. Instead, the RGBA from the OpenGL render is composited at that point, and splatting only continues if the alpha is not yet saturated. Should work well with only a little overhead on the backend.
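For discussion, here is a minimal NumPy sketch of the per-pixel rule described above, assuming splats arrive depth-sorted front-to-back. The names (`composite_pixel`, the `(rgb, alpha, depth)` tuples) are illustrative only; the real backend does this per pixel in CUDA:

```python
import numpy as np

def composite_pixel(splats, gl_rgba, gl_depth, alpha_cutoff=0.999):
    """Front-to-back compositing of depth-sorted splats against an
    OpenGL render. `gl_rgba` is a (4,) float array, `gl_depth` a scalar
    depth in the same space as the splat depths. Illustrative sketch."""
    color = np.zeros(3)
    alpha_acc = 0.0
    gl_blended = False
    for rgb, alpha, depth in splats:
        if depth > gl_depth and not gl_blended:
            # The OpenGL surface sits in front of all remaining splats:
            # composite its RGBA once, then keep splatting behind it
            # only while alpha is not saturated.
            color += (1.0 - alpha_acc) * gl_rgba[3] * gl_rgba[:3]
            alpha_acc += (1.0 - alpha_acc) * gl_rgba[3]
            gl_blended = True
        if alpha_acc >= alpha_cutoff:
            break  # pixel is effectively opaque; stop splatting
        color += (1.0 - alpha_acc) * alpha * np.asarray(rgb)
        alpha_acc += (1.0 - alpha_acc) * alpha
    if not gl_blended and alpha_acc < alpha_cutoff:
        # No splat was behind the GL surface; blend it as background.
        color += (1.0 - alpha_acc) * gl_rgba[3] * gl_rgba[:3]
        alpha_acc += (1.0 - alpha_acc) * gl_rgba[3]
    return color, alpha_acc
```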
So far I can render a cube outline with PyOpenGL to a hidden window, read back the depth buffer (normalized 0.0-1.0 between the near/far planes), and obtain both the color and depth as NumPy float arrays to pass to the GS CUDA backend.
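For reference, a minimal sketch of the hidden-window setup and readback along those lines; the window size, drawing code, and near/far values are placeholders:

```python
import numpy as np
from OpenGL.GL import (
    GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT, GL_DEPTH_COMPONENT,
    GL_DEPTH_TEST, GL_FLOAT, GL_RGBA, glClear, glEnable, glReadPixels,
)
from OpenGL.GLUT import (
    GLUT_DEPTH, GLUT_DOUBLE, GLUT_RGBA, glutCreateWindow, glutHideWindow,
    glutInit, glutInitDisplayMode, glutInitWindowSize,
)

W, H = 800, 600  # placeholder resolution

glutInit()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH)
glutInitWindowSize(W, H)
glutCreateWindow(b"offscreen")
glutHideWindow()  # keep the GL context but never show the window
glEnable(GL_DEPTH_TEST)

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
# ... draw the cube outline / other OpenGL items here ...

# Read back RGBA and depth; reshape and flip since OpenGL's
# origin is the bottom-left corner.
rgba = glReadPixels(0, 0, W, H, GL_RGBA, GL_FLOAT)
depth = glReadPixels(0, 0, W, H, GL_DEPTH_COMPONENT, GL_FLOAT)
rgba = np.frombuffer(rgba, dtype=np.float32).reshape(H, W, 4)[::-1]
depth = np.frombuffer(depth, dtype=np.float32).reshape(H, W)[::-1]

def linearize_depth(d, near, far):
    # Convert window-space depth in [0, 1] back to view-space distance,
    # assuming a standard perspective projection (so it can be compared
    # against the splat depths).
    ndc = 2.0 * d - 1.0
    return 2.0 * near * far / (far + near - ndc * (far - near))
```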
Updates: using `glLoadMatrixf` to load the exact modelview and projection matrices, so the OpenGL camera has exactly the same parameters as the gaussian splatting one. Had to reverse the z-sign for the projection matrix; it seems GS and OpenGL use opposite z-signs (camera forward). The matrices come from the `render_cam` object we use to render the GS model, so it should be trivial to add `opengl_renderer.render(render_cam)` in the main render thread.
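A rough sketch of that matrix loading, with the caveat that the attribute names on `render_cam` (`world_view_transform`, `projection_matrix`) are assumptions rather than the actual API, and which row carries the z-sign flip depends on the matrix convention:

```python
import numpy as np
from OpenGL.GL import GL_MODELVIEW, GL_PROJECTION, glLoadMatrixf, glMatrixMode

def load_gs_camera(render_cam):
    """Mirror the GS camera in OpenGL. Attribute names below are
    hypothetical placeholders for wherever the GS view/projection live."""
    view = np.asarray(render_cam.world_view_transform, dtype=np.float32)
    proj = np.asarray(render_cam.projection_matrix, dtype=np.float32).copy()

    # GS and OpenGL disagree on which way the camera looks along z,
    # so flip the sign of the projection's z row (convention-dependent).
    proj[2, :] *= -1.0

    # glLoadMatrixf expects column-major data, so transpose row-major
    # NumPy matrices before loading.
    glMatrixMode(GL_PROJECTION)
    glLoadMatrixf(proj.T)
    glMatrixMode(GL_MODELVIEW)
    glLoadMatrixf(view.T)
```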
In order to render effects such as camera locations, the current selector area, or even other objects, we want other rendering to mesh seamlessly with the GS forward render. This issue covers the backend work to support rendering standard OpenGL while still allowing the model to be rendered in that same scene.
Will be used by other issues like #14, #28, #26, etc.