morallo closed this issue 1 year ago
@Prof-Butts I updated the description with a lot more info.
Also, I was thinking that the old strategy of doing the full scene rendering at the end of the frame no longer makes sense now that the 3D transformations are hooked. The scene graph traversal is done by XWA, which calls ddraw to render each object. In fact, the "full multipass" approach (full left eye, then full right eye) would actually be more complicated. The natural way is to implement either single-pass stereo with instancing/texture arrays, or double-wide rendering at each draw call for each object.
Latest almost-working implementation bf42eb89d6b12c4d6aa376193d3f209941e63b2f
Pending fixes:
Hi @Prof-Butts, I tested again and it seems MSAA is working fine with the latest changes. I also verified that the engine glow misalignment and the stretching in the Tech Room are also happening in the current non-instanced version; they are probably due to the D3dRendererHook changes.
So, this could be merged after some testing on your side.
Implemented in #87
It can improve performance for GPU-intensive scenes or effects. However, this is probably not a priority until the CPU bottleneck is addressed.
A good intro to the different stereo rendering strategies: https://blog.unity.com/technology/how-to-maximize-ar-and-vr-performance-with-advanced-stereo-rendering And a video to see it more visually: https://youtu.be/datOOos-944?t=157
The best method for performance is Stereo Instancing, using a texture array with one slice per eye and a single instanced draw call that renders to both eyes. Hardware support for GPU instancing minimizes the GPU cost.
When the VPAndRTArrayIndexFromAnyShaderFeedingRasterizer feature is available (Win10, modern graphics drivers), the eye index can be used directly in the vertex shader to apply the correct view and projection matrices, so the implementation is very simple.
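As a rough sketch of what that vertex shader could look like (the constant-buffer layout and names are illustrative, not taken from the actual ddraw code), assuming the render target is a two-slice texture array and the caller doubles the instance count:

```hlsl
// Hypothetical constant buffer: one view-projection matrix per eye.
cbuffer StereoCB : register(b0)
{
    float4x4 viewProj[2];
};

struct VSIn
{
    float3 pos    : POSITION;
    uint   instId : SV_InstanceID; // instance count is doubled by the caller
};

struct VSOut
{
    float4 pos     : SV_Position;
    // Writing this from the VS (instead of a geometry shader) is what
    // VPAndRTArrayIndexFromAnyShaderFeedingRasterizer enables.
    uint   rtIndex : SV_RenderTargetArrayIndex;
};

VSOut main(VSIn input)
{
    VSOut o;
    uint eye  = input.instId & 1;                    // even -> left eye, odd -> right eye
    o.pos     = mul(viewProj[eye], float4(input.pos, 1.0f));
    o.rtIndex = eye;                                 // rasterize into slice 0 or 1
    return o;
}
```

On the CPU side the only change per object would be calling `DrawIndexedInstanced` with twice the instance count.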
If that is not supported, there are two alternatives using double-wide rendering:

1. Using GPU instancing and dynamic clipping, as proposed here: https://docs.google.com/presentation/d/19x9XDjUvkW_9gsfsMQzt3hZbRNziVsoCEHOn4AercAc/htmlpresent
2. Duplicating the draw calls per object, one for each eye, while still doing a single pass through the geometry, which is what Unity uses. There is an overhead of changing the viewport once every two draw calls (left, right, right, left, left, right...). The main benefit of this technique in the current state of XWA is to reduce the number of draw calls.

I believe most people who can play in VR have a modern GPU, and I believe the effects don't currently work in Win7, so we should probably try to implement Stereo Instancing with texture arrays.
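For the double-wide path, the per-eye viewport bookkeeping is simple. A minimal self-contained sketch (the `Viewport` struct stands in for `D3D11_VIEWPORT`, and the function name is illustrative, not from the mod's code):

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for D3D11_VIEWPORT so the sketch compiles without D3D headers.
struct Viewport {
    float TopLeftX, TopLeftY, Width, Height, MinDepth, MaxDepth;
};

// In double-wide rendering the render target is twice as wide as one eye;
// eye 0 (left) draws into the left half, eye 1 (right) into the right half.
Viewport EyeViewport(uint32_t eye, float eyeWidth, float eyeHeight)
{
    Viewport vp{};
    vp.TopLeftX = (eye == 0) ? 0.0f : eyeWidth; // right eye offset by one eye width
    vp.TopLeftY = 0.0f;
    vp.Width    = eyeWidth;
    vp.Height   = eyeHeight;
    vp.MinDepth = 0.0f;
    vp.MaxDepth = 1.0f;
    return vp;
}
```

Before each per-eye draw, the corresponding viewport would be bound with something like `context->RSSetViewports(1, &vp)`, which is the switch whose overhead is mentioned above.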
An example app on how this is implemented in DirectX: https://docs.microsoft.com/en-us/windows/mixed-reality/develop/native/rendering-in-directx#render-to-each-camera
With the code: https://github.com/microsoft/Windows-classic-samples/tree/27ffb0811ca761741502feaefdb591aebf592193/Samples/BasicHologram/cppwinrt/Content
A more modern example, but using OpenXR: https://github.com/microsoft/OpenXR-MixedReality/blob/main/samples/BasicXrApp/CubeGraphics.cpp#L220
This is quite beyond my actual 3D graphics implementation skills, but I hope this gives you enough info, @Prof-Butts. Don't hesitate to ping me to discuss!