Die4Ever opened 5 years ago
would be interesting to benchmark 25% VRS vs changing the screen resolution to 25% vs using 25% subsampling, I think VRS should lose this benchmark. if it's always subsampling, it might be a good idea to reallocate the buffers, or keep a 2nd allocation of sub-native buffers and choose which set to use each frame. if the buffers are below native size then don't use VRS supersampling, only use VRS for further subsampling, or maybe don't use VRS at all below native resolution?
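a rough sketch of the per-frame buffer-set choice described above. everything here (the function name, the `render_scale` fraction, the two buffer sets) is a hypothetical illustration, not an existing API:

```python
def choose_buffers(render_scale, has_subnative_buffers):
    """Pick which buffer allocation to render into this frame.

    render_scale: fraction of native resolution (1.0 = native, 0.25 = 25%).
    Returns (buffer_set, use_vrs_supersampling).
    """
    if render_scale < 1.0 and has_subnative_buffers:
        # below native: render into the smaller second allocation, and
        # never VRS-supersample there, only subsample further (or not at all)
        return ("subnative", False)
    # at or above native: keep native-sized buffers; VRS supersampling
    # only makes sense when we actually want more than 100% render scale
    return ("native", render_scale > 1.0)
```

the point of the second allocation is avoiding a reallocation stall when the render scale crosses 100% between frames.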
and also benchmark 400% of each, I think VRS should win that one
post-processing AA could take the VRS data as input so it knows not to apply AA to supersampled pixels? or so it can take special care of subsampled pixels?
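one way that AA/VRS interaction could look, as a hedged sketch. the (x, y) rate encoding here is just an illustration (fractional spacing meaning supersampled), not any API's actual shading-rate-image format:

```python
def aa_blend_weight(shading_rate):
    """How strongly post-process AA should blend a pixel, given the rate
    it was shaded at. Rates are (x, y) invocation spacing: (1, 1) = one
    invocation per pixel, (2, 2) = one per 2x2 block, and fractional
    spacing like (0.5, 0.5) = supersampled (4 invocations per pixel).
    """
    x, y = shading_rate
    if x < 1 or y < 1:
        return 0.0   # supersampled pixels already have AA baked in, skip them
    if x > 1 or y > 1:
        return 1.0   # subsampled pixels need the most smoothing help
    return 0.5       # normal 1x1 pixels get the usual AA strength
```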
I could use per-object VRS to render ray-traced reflective objects (especially rough, non-smooth ones) at a lower resolution? or maybe just reduce the ray count?
use this to determine the maximum resolution when allocating buffers, saves VRAM
dynamic pixel count will use dynamic resolution and also variable rate shading, and maybe MSAA too?
https://devblogs.microsoft.com/directx/variable-rate-shading-a-scalpel-in-a-world-of-sledgehammers/
https://developer.nvidia.com/vulkan-turing
"The shading rate can be as coarse as one fragment shader for each 4x4 block of pixels or as fine as launching 16 fragment shader invocations per pixel."
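a quick sanity check on the Turing numbers quoted above, treating a rate as one invocation per block of pixels (with supersampling expressed as a fractional "block"). this encoding is my own illustration, not the driver's:

```python
def invocations_per_pixel(block_w, block_h):
    """Fragment shader invocations per pixel for a VRS rate that launches
    one invocation per block_w x block_h block of pixels. Per the quoted
    NVIDIA Turing range, this spans 1/16 up to 16."""
    return 1.0 / (block_w * block_h)

# coarsest rate: one fragment shader invocation per 4x4 block of pixels
coarsest = invocations_per_pixel(4, 4)      # 1/16
# finest rate: 16 invocations per pixel, i.e. a quarter-pixel "block"
finest = invocations_per_pixel(0.25, 0.25)  # 16.0
```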
supersampling using VRS should be way more VRAM-efficient than using large render targets? that would mean even 1600% render scale could be done entirely with VRS using natively sized buffers?
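a back-of-the-envelope check on the VRAM claim, assuming 1080p and 4 bytes per pixel for a single color target (real frames have several targets, so the gap multiplies):

```python
def render_target_bytes(width, height, pixel_scale, bytes_per_pixel=4):
    """Bytes for one color target. pixel_scale is the total pixel-count
    multiplier, so a 1600% render scale is pixel_scale = 16.0."""
    return int(width * height * pixel_scale * bytes_per_pixel)

native = render_target_bytes(1920, 1080, 1.0)   # ~7.9 MB at native 1080p
huge   = render_target_bytes(1920, 1080, 16.0)  # ~126.6 MB at 1600%
# VRS supersampling shades 16x the fragments but resolves into the
# native-sized allocation, so the extra ~119 MB per target never exists
```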
objects with transparency/dithering would be biased towards 1x1 VRS or better
objects in the distance or inside of DoF or motion blur could be biased towards 1x2 or 2x2 VRS
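the two bias rules above could be sketched as a lookup on an ordered rate ladder. the ladder itself and the rule that transparency wins over blur are my assumptions:

```python
# rates ordered finest to coarsest; (0.5, 0.5) here stands in for
# supersampling ("1x1 or better"), the rest are coarse VRS block sizes
RATES = [(0.5, 0.5), (1, 1), (1, 2), (2, 2), (4, 4)]

def biased_rate(base_rate, has_transparency, is_distant_or_blurred):
    """Nudge a per-object VRS rate: transparency/dithering is biased
    towards 1x1 or better; distance / DoF / motion blur is biased one
    step coarser, clamped to the ends of the ladder."""
    i = RATES.index(base_rate)
    if has_transparency:
        i = min(i, RATES.index((1, 1)))   # never coarser than 1x1
    elif is_distant_or_blurred:
        i = min(i + 1, len(RATES) - 1)    # one step coarser
    return RATES[i]
```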
need settings for minimum desired frame rate and maximum fps, do I also need a target fps?
for each frame it would calculate how many pixels it wants to shade (no more than the maximum render scale allows) and then scale the VRS data to try to match that pixel budget
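the per-frame budget step above could be a simple proportional controller, assuming shading time is roughly proportional to shaded pixel count (a simplification; everything here is a hypothetical sketch):

```python
def vrs_scale_for_budget(target_fps, last_frame_ms, current_pixels, max_pixels):
    """Return the factor to scale the VRS data by for the next frame.

    Estimates the affordable shaded-pixel count from last frame's time,
    caps it at max_pixels (the maximum render scale), and expresses it
    as a multiplier on the current shaded-pixel count.
    """
    frame_budget_ms = 1000.0 / target_fps
    wanted_pixels = current_pixels * (frame_budget_ms / last_frame_ms)
    wanted_pixels = min(wanted_pixels, max_pixels)  # respect max render scale
    return wanted_pixels / current_pixels
```

in practice this would want smoothing/hysteresis so the shading rates don't oscillate frame to frame.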