Thank you for the good work! Can you estimate the processing time in a standard environment? I can't tell whether this approach can really reduce latency or improve graphics without trying it, so we should experiment by embedding it in ALVR.
I'm also busy now. I will try it when I have time.
There are quite a few trigonometric functions used, but some operations can be precomputed; the number of texture samplings is low and there is no branching. Based on my previous experience, I expect all shaders combined to take ~1 ms on PC and at most 2 ms on device. I can help no sooner than mid-July because I'm busy with university.
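For example (a hypothetical sketch, not the demo's actual math), any trigonometric term that depends only on per-frame parameters can be evaluated once on the CPU and fed to the shaders as uniforms:

```cpp
// Hypothetical sketch: trig terms that depend only on per-frame parameters
// (foveation center, strength) are computed once per frame on the CPU and
// uploaded as uniforms, instead of being re-evaluated per fragment.
#include <cmath>

struct FoveationUniforms {
    float centerX, centerY;  // foveation center in normalized coordinates
    float tanHalfFov;        // precomputed tan() of half the foveated FOV
    float invTanHalfFov;     // reciprocal, to avoid a divide per fragment
};

FoveationUniforms ComputeUniforms(float fovRadians, float cx, float cy) {
    FoveationUniforms u;
    u.centerX = cx;
    u.centerY = cy;
    u.tanHalfFov = std::tan(fovRadians * 0.5f);
    u.invTanHalfFov = 1.0f / u.tanHalfFov;
    return u;
}
```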
Thank you. If that estimate is right, the added latency will be negligible.
Hello,
I haven't written any code yet (I'll start in a month), but I have planned the necessary changes to add foveated rendering. I have some code that I use for shader prototyping; I will port it to DirectX 11 and OpenGL ES to integrate it into ALVR.
Recently I discovered that DirectX 11 supports command lists (i.e. prerecorded sequences of GPU commands), which I think would be particularly useful for a series of low-complexity rendering passes like my foveated rendering algorithm, and for this project. Maybe these are micro-optimizations, but they shouldn't be too difficult to implement.
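For reference, a minimal sketch of how such a command list could be recorded with a deferred context (names and structure are illustrative, not ALVR's actual code):

```cpp
// Minimal sketch: record a chain of passes once into an ID3D11CommandList,
// then replay it every frame with a single call. Names are illustrative.
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D11CommandList> RecordFoveationPasses(ID3D11Device* device) {
    ComPtr<ID3D11DeviceContext> deferred;
    device->CreateDeferredContext(0, &deferred);

    // Record the low-complexity passes here, e.g.:
    // deferred->OMSetRenderTargets(1, rtv.GetAddressOf(), nullptr);
    // deferred->PSSetShader(compressionShader.Get(), nullptr, 0);
    // deferred->Draw(3, 0);

    ComPtr<ID3D11CommandList> list;
    deferred->FinishCommandList(FALSE, &list);  // FALSE: discard deferred state
    return list;
}

// Per frame, on the immediate context:
//   immediate->ExecuteCommandList(list.Get(), TRUE);
```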
As for the client part, would you like me to use OpenGL ES 2 or 3? I have code almost ready for OpenGL ES 3; the advantage over 2 is that vertex and index data can be generated directly inside the shaders (for example, for rendering a fullscreen quad), so there are fewer GL calls and less latency. Switching to Vulkan would be even better thanks to its pipelines, but fewer devices support it and, again, it would only be a micro-optimization.
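As an illustration of the OpenGL ES 3 technique mentioned above (a generic fullscreen-triangle trick, not my actual prototype), `gl_VertexID` lets the vertex shader synthesize the geometry, so no buffers or attribute setup are needed:

```cpp
// Sketch: with #version 300 es, gl_VertexID is available, so a fullscreen
// triangle can be generated entirely in the vertex shader.
static const char* kFullscreenVertexShader = R"(#version 300 es
void main() {
    // Vertices (-1,-1), (3,-1), (-1,3): a single triangle covering the screen.
    vec2 pos = vec2(float((gl_VertexID & 1) << 2) - 1.0,
                    float((gl_VertexID & 2) << 1) - 1.0);
    gl_Position = vec4(pos, 0.0, 1.0);
})";

// Draw call: no VBO/IBO bound, no vertex attributes enabled.
//   glUseProgram(program);
//   glDrawArrays(GL_TRIANGLES, 0, 3);
```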
I want to share a concept for a foveated rendering system. I had the idea a few years ago while working on a project similar to ALVR (which I abandoned due to lack of time). After reading about fixed foveated rendering in #287, I decided to implement my idea and share it with you.
Here is the link to the demo I wrote: https://www.shadertoy.com/view/3l2GRR
Briefly, the algorithm consists of a chain of shader passes done on the PC and on the client device, where every pass uses the output of the previous one (a sketch of the idea follows below). They are, in order:
On the PC:
On the device:
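To illustrate the compress/expand pairing (the authoritative version is the Shadertoy demo linked above; the tan/atan warp below is only an assumption based on the trigonometric functions mentioned):

```cpp
// Illustrative sketch of a radial foveation warp pair; not the demo's exact
// formulas. r is the normalized distance from the foveation center in [0, 1],
// theta in (0, pi/2) controls how strongly the periphery is squeezed.
#include <cmath>

// PC pass (before encoding): for each pixel of the smaller compressed image,
// the radius at which to sample the full-resolution frame. The slope grows
// toward the edges, so the periphery of the source is squeezed into fewer
// compressed pixels while the center keeps relatively more.
float CompressedToSource(float r, float theta) {
    return std::tan(r * theta) / std::tan(theta);
}

// Device pass (after decoding): the exact inverse, mapping a display-space
// radius back into the compressed image for sampling.
float SourceToCompressed(float r, float theta) {
    return std::atan(r * std::tan(theta)) / theta;
}
```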
I did not do any testing on a VR device, but from how the demo turned out I can list some pros and cons compared to an FFR system that slices the frames into smaller images and renders them at different resolutions.
Pros:
Cons:
I would like to hear your opinion on incorporating this algorithm into ALVR. I would really love to contribute myself, but right now I'm really busy. In any case, I can help with explanations if needed.