dklassic opened this issue 1 year ago
Hi,
Making FSR2 work with stacked cameras is a rather tricky problem, for several reasons. First and foremost, the way upscaling is handled in the built-in render pipeline integration is a bit of a hack. It relies on undocumented and not very clearly defined behavior from the black box that is Unity's built-in render pipeline. That it even works at all (and is quite reliable and portable) is already something of a miracle, more luck than anything to be honest.
I'm just going to list a few ideas and think out loud about the consequences of each:
- What you probably want ideally is for both cameras to render at the lower internal render resolution, combine those two images, and then apply FSR2 only once on the combined image. This requires some coordination between the two cameras: both have to apply the same resolution scale and jitter offset, but only one of them does the FSR2 dispatch in the post-processing phase (see the sketch below). It might also get tricky to combine the depth buffers and motion vectors from both cameras. That would require some custom materials and blitting passes, I think.

It's an interesting conundrum that I might look into further, since several people have already mentioned it as an obstacle to using FSR2. The last bullet point is, I think, what we should be looking to accomplish, but there are some hurdles to overcome. Hopefully to be continued...
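Roughly, the scale-and-jitter half of that coordination could look something like the sketch below. The component name and the rect-scaling approach are made up for illustration and untested, not part of this project's actual API; what I do know is that FSR2 uses a Halton(2,3) jitter sequence, with AMD documenting the phase count as 8 × (display resolution / render resolution)².

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class ScaleAndJitterOnly : MonoBehaviour
{
    // Per-axis render scale, e.g. 1 / 1.5 for FSR2's 'Quality' mode.
    [Range(0.25f, 1.0f)]
    public float renderScale = 0.667f;

    private Camera cam;
    private int frameIndex;

    void OnEnable()
    {
        cam = GetComponent<Camera>();
    }

    void OnDisable()
    {
        // Restore the camera to its unscaled, unjittered state.
        cam.rect = new Rect(0f, 0f, 1f, 1f);
        cam.ResetProjectionMatrix();
    }

    void OnPreCull()
    {
        // Render at reduced resolution by shrinking the viewport rect
        // (the same 'rect scaling' this thread talks about).
        cam.rect = new Rect(0f, 0f, renderScale, renderScale);

        // FSR2 cycles through a Halton(2,3) jitter sequence; AMD's docs
        // put the phase count at 8 * (displayRes / renderRes)^2.
        int phaseCount = Mathf.CeilToInt(8f / (renderScale * renderScale));
        float jitterX = Halton(frameIndex + 1, 2) - 0.5f;
        float jitterY = Halton(frameIndex + 1, 3) - 0.5f;
        frameIndex = (frameIndex + 1) % phaseCount;

        // Keep the unjittered matrix around so motion vectors stay clean.
        cam.ResetProjectionMatrix();
        Matrix4x4 proj = cam.projectionMatrix;
        cam.nonJitteredProjectionMatrix = proj;

        // Sub-pixel translation in clip space; pixelWidth/Height already
        // reflect the scaled viewport here. Signs may need flipping
        // depending on the setup.
        proj.m02 += jitterX * 2f / cam.pixelWidth;
        proj.m12 += jitterY * 2f / cam.pixelHeight;
        cam.projectionMatrix = proj;
    }

    // Radical-inverse Halton sequence.
    static float Halton(int index, int radix)
    {
        float result = 0f;
        float fraction = 1f / radix;
        while (index > 0)
        {
            result += (index % radix) * fraction;
            index /= radix;
            fraction /= radix;
        }
        return result;
    }
}
```

Both stacked cameras would carry this with the same renderScale, and only the top one would additionally run the actual FSR2 dispatch.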
Nice read there, thanks for sharing! I know it's quite a hassle to work with the black box of BIRP, and I really appreciate your attempt at implementing FSR2 in BIRP. I've had my fair share of issues implementing dynamic resolution with my setup, so I'd imagine it would be an even greater task to make FSR2 work on stacked cameras.

Again, thanks for your thoughts, I might try my hand at this once I've wrapped up some crucial parts of my project.
I did a little experiment and as expected, FSR2 upscaling works fine if you have the bottom camera only apply the resolution scale and jitter offset, and leave the top camera to do the upscaling and reconstruction. Depth and motion vectors aren't even a problem, because the stacked cameras will render to the same output buffers.
The problem is more with things like the reactive mask and other post-processing effects. To properly deal with transparencies and texture animations, the reactive masks of both cameras would have to be combined. The auto-reactive mask pass doesn't accept input from a previous pass, so it can't easily be chained. The auto-TCR pass does, but since it's part of the full FSR2 dispatch, you can't easily run it separately on a camera without doing upscaling as well. You could combine the outputs of multiple auto-reactive passes manually, but what is the right combination op here: add, max, replace, or blend? Ideally the application itself would output a reactive mask from its shaders, but then it becomes the responsibility of the game developer.
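If I had to pick, max seems like the safest default: a pixel that is reactive for either camera should stay reactive in the combined mask, whereas add can overshoot past 1.0 and replace throws away the bottom camera's contribution. Something along these lines, assuming a trivial two-input max shader (the shader name is made up, it's not part of this project):

```csharp
using UnityEngine;

public static class ReactiveMaskCombiner
{
    static Material combineMat;

    // Combines two cameras' reactive masks into one, taking the per-pixel
    // maximum. Assumes a hypothetical "Hidden/CombineReactiveMax" shader
    // that outputs max(_MainTex, _SecondTex).
    public static void CombineMax(RenderTexture bottomMask, RenderTexture topMask, RenderTexture combined)
    {
        if (combineMat == null)
            combineMat = new Material(Shader.Find("Hidden/CombineReactiveMax"));

        combineMat.SetTexture("_SecondTex", topMask);
        Graphics.Blit(bottomMask, combined, combineMat);
    }
}
```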
As for post-processing, you wouldn't want the bottom camera to run certain post-FX that don't play nicely with upscaling (e.g. motion blur, film grain), but rather defer those tasks to the top camera, to be run after FSR2 upscaling. Ideally only the top camera would perform post-processing, but that may not be desirable depending on what the camera's purpose is. What is 'correct' here very much depends on what each camera is for and how they're used by the application.
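For example, if you're on the Post-processing Stack v2, you could keep those effects off the bottom camera's volume and leave them enabled on the top camera. A rough sketch, assuming a PPv2 setup (the component name is made up):

```csharp
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

// Disables upscaling-unfriendly effects on the bottom camera's volume,
// leaving them to the top camera to run after FSR2.
public class BottomCameraPostFxFilter : MonoBehaviour
{
    public PostProcessVolume bottomVolume;

    void OnEnable()
    {
        // 'profile' returns a per-volume instance, so the shared
        // profile asset on disk is left untouched.
        if (bottomVolume.profile.TryGetSettings(out MotionBlur motionBlur))
            motionBlur.active = false;
        if (bottomVolume.profile.TryGetSettings(out Grain grain))
            grain.active = false;
    }
}
```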
I guess what I'm trying to say is that it's very hard to make a one-size-fits-all solution for this. So many parts of it are application-specific that it's rather unavoidable that this has to be set up and customized differently for each game.
I'll look into this a bit further. I'll probably end up writing a component that applies only viewport scaling and jitter offset to a camera based on an FSR quality mode. I might also write something to facilitate combining reactive masks. But I think that's about as much as I can do for a generic solution.
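For what it's worth, the scale factors for such a component would come straight from AMD's documented FSR2 quality modes (1.5x, 1.7x, 2.0x, and 3.0x per axis); a trivial mapping, with the enum and method names made up here:

```csharp
// Maps FSR2 quality modes to a per-axis render scale. The ratios are
// AMD's documented FSR2 scaling factors; the names are illustrative.
public enum Fsr2QualityMode { Quality, Balanced, Performance, UltraPerformance }

public static class Fsr2Scaling
{
    public static float GetRenderScale(Fsr2QualityMode mode)
    {
        switch (mode)
        {
            case Fsr2QualityMode.Quality:          return 1f / 1.5f;  // ~0.667
            case Fsr2QualityMode.Balanced:         return 1f / 1.7f;  // ~0.588
            case Fsr2QualityMode.Performance:      return 1f / 2.0f;  // 0.5
            case Fsr2QualityMode.UltraPerformance: return 1f / 3.0f;  // ~0.333
            default:                               return 1f;
        }
    }
}
```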
Not quite a direct update on this issue, but I decided to use a workaround that's somewhat related to one of the issues you mentioned in the readme of this repo, so I figured it's best to give you some info on this matter.
So in my game, I use a three-camera setup:

- a Background camera at the bottom, which draws most of the scene,
- a gameplay camera stacked on top of it, and
- a UI camera above those.
Since I wanted to add FSR2 more to lower the load than to use its supersampling/anti-aliasing on the full image, I figured it would be enough to just apply FSR2 to the Background camera alone (since it draws most of the elements, upscaling it alone already gives a big performance boost).
But there's the same problem you stated in your readme, which has bothered me ever since I implemented my DRS system:
> Unity also offers a dynamic resolution system (through `ScalableBufferManager`) which would be a good fit for upscaling techniques, but the problem there is that it's not supported on every platform. In particular DirectX 11 on Windows does not support `ScalableBufferManager`, which is a deal breaker. Additionally `ScalableBufferManager` has a long-standing issue where UI gets scaled as well, even if the UI camera is set not to allow dynamic resolution.
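For anyone reading along, Unity's side of that API boils down to the sketch below; the fixed scale value is my own simplification standing in for a real heuristic driven by GPU frame time, and it assumes a supported platform plus cameras with Allow Dynamic Resolution enabled.

```csharp
using UnityEngine;

// Minimal sketch of Unity's built-in dynamic resolution API referenced in
// the readme quote above. It does nothing on unsupported platforms such
// as DirectX 11 on Windows, and only affects cameras that have
// allowDynamicResolution enabled.
public class DynamicResolutionDriver : MonoBehaviour
{
    [Range(0.25f, 1f)]
    public float scale = 0.75f;

    void Update()
    {
        // Resizes every dynamically scalable render target at once, which
        // is exactly why the UI ends up scaled along with everything else.
        ScalableBufferManager.ResizeBuffers(scale, scale);
    }
}
```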
I initially gave up and switched my UI elements to `Screen Space - Overlay` whenever DRS is active, but I recently found out that the Built-in RP's black box seems to automagically link cameras into actual stacks so that they scale together in the rendering pipeline, which is why both `Allow Dynamic Resolution` and rect scaling on the bottom camera will scale all the other cameras as well.
One way that I know of to break that automagical link is to give the cameras slightly different viewport rects, e.g. W = 1, 1.000001, and 0.999999 for my three cameras. With this I was able to apply scaling to only the bottom camera and run DRS without impacting the UI, and on that note, to stuff your FSR2 implementation into my game.
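In code form, the workaround is just this (the camera fields reflect my own three-camera setup described above):

```csharp
using UnityEngine;

// Breaks Built-in RP's implicit camera-stack link by giving each camera a
// minutely different rect width, so resolution scaling on the bottom
// camera no longer drags the other cameras (and the UI) along with it.
// The epsilon values mirror the ones quoted above.
public class BreakCameraStackLink : MonoBehaviour
{
    public Camera backgroundCam;  // bottom: gets DRS / FSR2 scaling
    public Camera gameplayCam;    // middle: stays at native resolution
    public Camera uiCam;          // top: must not be scaled

    void Awake()
    {
        backgroundCam.rect = new Rect(0f, 0f, 1f,        1f);
        gameplayCam.rect   = new Rect(0f, 0f, 1.000001f, 1f);
        uiCam.rect         = new Rect(0f, 0f, 0.999999f, 1f);
    }
}
```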
Just figured you might find the information useful, so here I am!
anyways, some results from my FSR2 setup:
[Screenshots: 720p native vs. 720p FSR2, both output at 4K]
FSR2 already upscales the image well enough, and while it is far from 4K-like, with the additional gameplay camera on top rendering at native 4K it is seriously hard to notice during gameplay.
Hi, just learned of the existence of this project. Great job on this, though I was wondering if there's any workaround to make FSR2 work with a dual camera setup? My project is set up in a way that the scene needs to be rendered with two separate cameras stacked on top of each other. I've noticed that I cannot enable FSR2 on the top camera, as there will be an error reporting a color dimension mismatch with depth. On the other hand, if I use it on the bottom camera, the top camera will be rendered at a lower resolution but without any reconstruction.
Is there any possibility to make this work?