mleotta opened 3 years ago
Note that this could be closely related to the mesh rendering in camera view task, as the depth field can be derived directly from the z-buffer of the mesh rendering, distorted or not. In theory, we should be able to kill two birds with one stone: camera view mesh rendering & depth view.
In theory this is true, but in practice there may be benefits to keeping this processing separate.
The z-buffer in OpenGL is quantized to integer values. This is good enough for depth testing, but may not be precise enough for some other applications.
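For reference, here is a minimal sketch of recovering eye-space depth from a normalized z-buffer value under a standard perspective projection (parameter names are illustrative). The non-linear mapping combined with the fixed-point storage is where the precision loss comes from:

```cpp
// Convert a depth value read from the OpenGL z-buffer (normalized to [0, 1],
// and typically stored as 24-bit fixed point) back to eye-space depth for a
// standard perspective projection with the given clipping planes.
double linearize_depth(double z_buffer_value, double near_clip, double far_clip)
{
  double z_ndc = 2.0 * z_buffer_value - 1.0;  // map [0, 1] -> [-1, 1]
  return (2.0 * near_clip * far_clip) /
         (far_clip + near_clip - z_ndc * (far_clip - near_clip));
}
```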
Some KWIVER users with constrained hardware may not have a GPU. KWIVER already has CPU code to render depth maps from meshes at double precision. TeleSculptor users would generally have a GPU, but making this a KWIVER algorithm opens it up to new use cases. We could have both CPU and GPU (via VTK) algorithms to trade off speed and accuracy.
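If we do add a GPU path via VTK, it might look roughly like the sketch below: render the mesh with the active camera into a render window and read back the z-buffer, which would then still need linearizing and distortion handling. This is only a sketch under those assumptions, not the actual proposed implementation.

```cpp
#include <vtkRenderWindow.h>
#include <vector>

// Sketch of the GPU (VTK) path: after the mesh has been rendered with the
// active camera, read back the normalized z-buffer from the render window.
// Values are in [0, 1] and still need to be converted to metric depth.
std::vector<float> read_depth_buffer(vtkRenderWindow* window)
{
  int const* size = window->GetSize();
  float* raw = window->GetZbufferData(0, 0, size[0] - 1, size[1] - 1);
  std::vector<float> depth(raw, raw + size[0] * size[1]);
  delete[] raw;  // caller owns the array returned by GetZbufferData
  return depth;
}
```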
I'd like the rendering of the mesh in the camera view to be as dynamic as possible; that is, re-rendering the current field of view at the native screen resolution as you pan and zoom, including rendering parts of the scene that fall outside of the image bounds. This means the z-buffer used for rendering may not always cover the full image at the same resolution as the original image.
Add an algorithm to the Compute menu to render the mesh to a depth image using the active camera models. Use the existing code in KWIVER to render the mesh to a depth image.
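Assuming the existing KWIVER helper is the mesh depth rendering function in arrows/core (I believe it is something like `render_mesh_depth_map`, but the exact header and signature should be checked against the KWIVER source), the core of the new algorithm could be as simple as:

```cpp
#include <arrows/core/render_mesh_depth_map.h>  // assumed location of the existing CPU renderer
#include <vital/types/camera_perspective.h>
#include <vital/types/image_container.h>
#include <vital/types/mesh.h>

// Sketch: rasterize the mesh into a double-precision depth image for one
// active camera model, delegating to the existing KWIVER CPU code.
kwiver::vital::image_container_sptr
compute_depth_for_camera(kwiver::vital::mesh_sptr mesh,
                         kwiver::vital::camera_perspective_sptr camera)
{
  return kwiver::arrows::core::render_mesh_depth_map(mesh, camera);
}
```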