Ralith opened this issue 3 years ago
A simple fix (which I really hope is consistent with real-world deployments) would be for the visibility mask extension to guarantee that the topologies of the meshes are identical when used with XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO. This would still require two vertex buffers to be bound, or for position data to be fetched manually from e.g. an SSBO, but it reduces the effort needed from the application substantially. A more disruptive change could improve on that further by supplying a single mesh and per-view transform matrices for it, but that may not be justified.
An issue (number 1542) has been filed to correspond to this issue in the internal Khronos GitLab (Khronos members only: KHR:openxr/openxr#1542 ), to facilitate working group processes.
This GitHub issue will continue to be the main site of discussion.
Someone could probably write an extension that would let you pass an extra struct on the chain to request meshes with matching topologies. That would be the least invasive fix.
My understanding is that some headsets would have had these vismasks pretty much handcrafted, so it may not be practical to emit matching topologies in all cases. Though I guess it's theoretically possible to produce?
As a workaround, I landed on this for an upcoming update to my xrlib built-in Vulkan renderer: I used two subpasses for the stencil pass with depth write disabled, one for the left eye and one for the right eye.
The main trick, however, was to use two slightly different graphics pipelines for the subpasses, with the fragment shader for each eye discarding based on the view index:
Left eye subpass:

```glsl
#version 450
// View index forwarded from the vertex shader (declaration assumed; the
// original snippet showed only main()).
layout(location = 0) flat in int fragViewIndex;

void main() {
    // Left-eye subpass: discard fragments tagged for the right eye (view 1)
    if (fragViewIndex == 1) {
        discard;
    }
}
```
Right eye subpass:

```glsl
#version 450
// View index forwarded from the vertex shader (declaration assumed)
layout(location = 0) flat in int fragViewIndex;

void main() {
    // Right-eye subpass: discard fragments tagged for the left eye (view 0)
    if (fragViewIndex == 0) {
        discard;
    }
}
```
Since depth writes and depth tests are disabled at this stage anyway, the discards shouldn't really have a perf hit.
It'd still be ideal to have a native extension solution though, if all platforms are able to provide the necessary vismask topology for multiview use.
Another option I considered (though I haven't tested it), which might also work as an extension or as a workaround for those who don't want any changes to their render pass: combine the two sets of vertices into a single buffer for one framebuffer, adjusting the indices accordingly, but add a third dimension (vec2 to vec3) to the vismask vertices to hold the view/eye index.
Since the final matrix calculations are typically done in the vertex shader, it's just a matter of comparing the eye index stored in the vertex against gl_ViewIndex; if it matches, proceed as normal, otherwise push the vertex off-camera (e.g. change the x or z axis).
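A host-side sketch of how that combined vec3 buffer could be packed, assuming the two per-view masks have already been queried; the `Vec2`/`Vec3`/`CombineMasks` names are illustrative stand-ins, not part of OpenXR or xrlib:

```cpp
#include <cstdint>
#include <vector>

// Minimal stand-ins for the XrVector2f data returned per view.
struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Pack both views' masks into one vertex/index buffer pair, storing the
// view index in z so the vertex shader can compare it against gl_ViewIndex.
void CombineMasks(const std::vector<Vec2>& leftVerts,
                  const std::vector<uint32_t>& leftIndices,
                  const std::vector<Vec2>& rightVerts,
                  const std::vector<uint32_t>& rightIndices,
                  std::vector<Vec3>& outVerts,
                  std::vector<uint32_t>& outIndices)
{
    outVerts.clear();
    outIndices.clear();
    for (const Vec2& v : leftVerts)  outVerts.push_back({v.x, v.y, 0.0f}); // z = view 0
    for (const Vec2& v : rightVerts) outVerts.push_back({v.x, v.y, 1.0f}); // z = view 1
    outIndices = leftIndices;
    const uint32_t offset = static_cast<uint32_t>(leftVerts.size());
    for (uint32_t i : rightIndices)
        outIndices.push_back(i + offset); // rebase right-eye indices past the left verts
}
```

The vertex shader then draws every triangle in both views but only keeps the ones whose z matches the current view, pushing the rest off-screen.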
I felt the subpass solution I finally settled on (above) was probably less "hacky" and can be easily perf profiled.
> may not be practical to emit matching topologies in all cases. though i guess theoretically its possible to produce?
Surely stereo headsets will always be bilaterally symmetric?
> since we have depthwrites and tests disabled at this stage anyways, the discards shouldn't really have a perf hit.
I suspect you're still paying twice the stencil bandwidth strictly required here.
> I suspect you're still paying twice the stencil bandwidth strictly required here.
Since it’s a subpass within the same render pass, there are some optimizations, though a slight bandwidth impact may occur from the additional vkCmdDraw call. Multi-draw benefits won’t apply in the stencil pass, but this should be minimal as the masks are usually simple shapes rather than complex 3D models.
Note: This workaround isn't intended to replace native extension support; it's shared here in case others find it helpful in their applications, since (at least in Vulkan) it avoids being forced to create a separate render pass just for the stencils.
I agree with Rylie that adding a new extension struct is preferable, since it avoids imposing requirements on the platform and, consequently, on apps (via new versions of this extension).
> may not be practical to emit matching topologies in all cases. though i guess theoretically its possible to produce?
>
> Surely stereo headsets will always be bilaterally symmetric?
How did you end up implementing this using matching topologies?
Planar reflection of a trimesh does not change its topology.
Oh, I was asking about your final implementation with multiview in your app, especially after you manually modified the topologies. I can think of a few methods, like flipping/mirroring, using separate vertex buffers, or instancing. I just want to get a clearer picture before suggesting what a new extension/platforms should do or provide if possible.
@rpavlik - has there been any feedback from the working group on this?
If the visibility mask was guaranteed symmetric, I could draw a "regular" mesh containing the mask for one of the eyes with multiview, and add a negative x scale to the camera matrix for one eye. This would allow me to reuse my existing shaders, pipelines, vertex buffer setup, etc. For me, at least, this would be a lot simpler than rendering a different mesh to each view.
> If the visibility mask was guaranteed symmetric, I could draw a "regular" mesh containing the mask for one of the eyes with multiview, and add a negative x scale to the camera matrix for one eye. This would allow me to reuse my existing shaders, pipelines, vertex buffer setup, etc. For me, at least, this would be a lot simpler than rendering a different mesh to each view.
Yeah, that does seem like the approach most applications would take, either directly or through instancing. I’m curious if there are specific cases where apps would still benefit from runtimes providing the right eye’s mask as well?
If platforms can provide this symmetric mask, it might be practical for the extension to define something like a function:
```c
XrResult xrGetSymmetricVisibilityMaskEXT(
    XrSession                  session,
    XrVisibilityMaskTypeKHR    visibilityMaskType,
    XrVisibilityMaskKHR*       visibilityMask);
```
where only the left-eye mask will be provided, and apps should handle the right eye mirroring.
Also, since visibility masks can change during runtime (via the KHR visibility mask change event), runtimes supporting this extension should guarantee to always provide a symmetrical mask throughout a session, simplifying things for developers.
If platforms can't provide a symmetric mask for certain headsets for whatever reason, the extension could also have something like:
```c
typedef struct XrSystemSymmetricMaskPropertiesEXT {
    XrStructureType    type;
    void*              next;
    XrBool32           supportsSymmetricMasks;
} XrSystemSymmetricMaskPropertiesEXT;
```
extending XrSystemProperties, which apps can use to check whether they should use a symmetric-mask pipeline or fall back to a workaround one as needed.
My two cents.
I've finalized my workaround for this issue in my WIP version of xrlib, which is part of an upcoming major update to OpenXRProviderV2.
This is for anyone who might come across this thread while seeking workarounds to get multiview + vismask working in a single render pass with no post-processing of data coming from the OpenXR runtime, especially since this aspect isn't currently addressed in the OpenXR specification. It can also serve as a fallback pipeline in case a new extension or version is released.
As I initially outlined, I am implementing two subpasses for the visibility masks. You can see the relevant implementation here: https://github.com/1runeberg/xrlib/blob/d15057b0bbc533faa66b207bab2021b137087941/src/xrvk/render.cpp#L511
Instead of performing checks in the fragment shader, however, I've chosen to handle them earlier, in the vertex shaders. I have developed separate shaders for each eye, which adjust the vertex positions based on gl_ViewIndex, effectively moving them out of the camera view. You can find the shaders in this repository:
https://github.com/1runeberg/xrlib/tree/main/res/shaders/src
First, background: multiview rendering broadcasts a single stream of commands to two separate layers of an array image, with shader invocations distinguished on the GPU by an index that can be used to e.g. select the appropriate view-projection matrix. This is very convenient and may improve performance for stereo rendering, as is common with HMDs.
The visibility mask API provides a distinct set of vertices and indices for each view. This makes it painful to combine with multiview, because with multiview you must bind a single index buffer/vertex buffer set and render it to both views. OpenXR provides no guarantees whatsoever about the topology of the visibility mask, so portable code cannot assume that even the number of vertices matches, let alone their connectivity. Therefore, simply uploading the provided geometry to the GPU and drawing, even selecting between two vertex inputs based on view ID in the shader, is not portable.
I've worked around this problem in my application by flattening the indexed geometry of each view into a non-indexed triangle list, padding any mismatch in number of vertices using degenerate triangles, and finally selecting between two vertex inputs using the view index. This works okay in practice, but feels like a really horrible hack.