
Create a more powerful and customizable workflow for custom post-processing effects #2196

Open Arnklit opened 3 years ago

Arnklit commented 3 years ago

Describe the project you are working on

Various 3D projects

Describe the problem or limitation you are having in your project

The current workflow for adding custom post-processing is cumbersome: it cannot be previewed in the editor camera and is limited to being applied after all built-in post-processing. It also makes it complicated to expose game settings that let the user enable/disable custom post-processing effects. I assume it also ends up allocating new buffers for each effect added on, making it more expensive in performance than if it could reuse buffers in the built-in post-processing stack.

Describe the feature / enhancement and how it helps to overcome the problem or limitation

I'd like the ability to add custom post-processing in the WorldEnvironment node and to choose at which point in the stack it gets applied.

Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams

I could imagine an interface like this: [mock-up image]

Where you could add a list of post-processing shaders, manage their shader parameters, and decide when they are applied in the stack.

Possibly another shader_type would be added that had specific hooks for using the same buffers as the rest of the stack, if that is not possible with canvas_item.

If this enhancement will not be used often, can it be worked around with a few lines of script?

This could not easily be added with a few lines of code and would be used often.

Is there a reason why this should be core and not an add-on in the asset library?

This would have to be done in core.

Calinou commented 3 years ago

reduz suggested adding a FullScreenQuad node, which would consist of a single triangle covering the whole screen. This would make post-processing effects easier to add while keeping them easy to distribute on the asset library.

(A single triangle covering the whole screen will be minutely faster than two triangles, at least on desktop platforms.)
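
For reference, such a screen-covering triangle can already be built by hand with SurfaceTool; a minimal sketch (using Godot 4 class names, MeshInstance in 3.x; not an engine API) could look like this:

```gdscript
extends MeshInstance3D

# Builds the single screen-covering triangle. The shader on this mesh is
# expected to map local XY straight to clip space in vertex(), so the corners
# (-1,-1), (3,-1) and (-1,3) cover the whole [-1, 1] screen square with one
# triangle and no diagonal seam. Use render_mode cull_disabled in the shader
# if the winding doesn't match the default back-face culling.
func _ready() -> void:
    var st := SurfaceTool.new()
    st.begin(Mesh.PRIMITIVE_TRIANGLES)
    st.add_vertex(Vector3(-1.0, -1.0, 0.0))
    st.add_vertex(Vector3(3.0, -1.0, 0.0))
    st.add_vertex(Vector3(-1.0, 3.0, 0.0))
    mesh = st.commit()
    # A huge custom AABB keeps the triangle from ever being frustum-culled.
    custom_aabb = AABB(Vector3.ONE * -1e9, Vector3.ONE * 2e9)
```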

Zireael07 commented 3 years ago

A FullScreenQuad node that automatically covers the screen would be excellent - with my custom post-process motion blur, I sometimes see the effect 'lagging' behind ...

clayjohn commented 3 years ago

> reduz suggested adding a FullScreenQuad node, which would consist of a single triangle covering the whole screen. This would make post-processing effects easier to add while keeping them easy to distribute on the asset library.

Note: the FullScreenQuad idea would be sorted into the alpha pass of the regular render pass and would still use a spatial material internally. Users would be responsible for setting the material to transparent and unshaded. Importantly, this runs before tonemapping and built-in post-processing, so it would be rather limited. It is essentially a built-in way of doing the post-processing method described in the docs.

The FullScreenQuad method would, however, be very simple to add and wouldn't require any changes to the renderer (i.e. it can just be added as a node).
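
For context, the documented method this would formalize boils down to a screen-reading spatial shader on a camera-facing quad, roughly like this (a sketch in Godot 3.x syntax, where SCREEN_TEXTURE is a built-in):

```glsl
shader_type spatial;
render_mode unshaded;

void vertex() {
    // Map the quad's local XY straight to clip space so it always fills the view.
    POSITION = vec4(VERTEX, 1.0);
}

void fragment() {
    // Read back the opaque pass and write a (here: unmodified) copy.
    // This runs before tonemapping and the built-in post-processing stack.
    ALBEDO = texture(SCREEN_TEXTURE, SCREEN_UV).rgb;
}
```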

We also discussed something like this proposal earlier, i.e. a custom shader_type post-process that is inserted into the built-in post-processing shader. The downside of this is that it is rather complex and not really flexible when you want to, for example, chain multiple effects together.

Overall, I am unhappy with either approach. I think in the long run, we need to redesign how we handle post-processing to better support making custom post-processing effects.

Ansraer commented 3 years ago

mux and clayjohn had a short discussion about this on the #rendering RocketChat channel today. They proposed a completely different approach: using a post-processing graph. [mock-up image] (quick mockup I created)

Both built-in effects and user-created ones would be available as nodes and could be chained together in any order.

Zireael07 commented 2 years ago

Tangent: I was sure I had mentioned it, but currently post-process effects affect everything (e.g. gizmos) - we need a way to exclude some nodes/visual layers from them.

Calinou commented 2 years ago

> Tangent: I was sure I had mentioned it, but currently post-process effects affect everything (e.g. gizmos) - we need a way to exclude some nodes/visual layers from them.

This is being tracked in https://github.com/godotengine/godot-proposals/issues/2138.

wareya commented 2 years ago

I hope that whatever system is decided on here doesn't make it unnecessarily complex or have unnecessary drawbacks.

Background behind my opinion follows. Skip to the bottom to see what I actually have to say.

One of my projects involves being able to load maps from Quake-like games, but also involves HDR lighting.

Going in-game with the default tonemapping results in crushed, oversaturated colors, because the default tonemapping is just a clipping function (blue light was added to help demonstrate the limitations of pure clipping):

https://user-images.githubusercontent.com/585488/168461301-9d29c9d6-f931-47bc-83d2-c6b734cbace8.jpg

Using ACES Fitted avoids this, but it changes the overall lighting balance of the scene, because ACES Fitted does a lot more than just desaturate very bright colors when they clip. This is undesirable because this project uses maps from pre-existing games, and their lighting is no longer replicated remotely faithfully:

https://user-images.githubusercontent.com/585488/168461315-277497dc-2d2b-447a-b8bf-0c8de4e42679.jpg

I wrote a custom tonemapping shader that "just" desaturates high-energy colors and it looks fine:

https://user-images.githubusercontent.com/585488/168461347-2427e279-adcf-49e0-ab2f-45a250f06aa7.jpg

(Side note: custom tonemapping curves would not help me here, only custom tonemapping cubes or custom tonemapping shaders.)
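
To give a sense of scale, the core of such a tonemap can be tiny. A minimal sketch of the idea (not my exact shader; just a plain shader-language function called wherever the HDR color is available):

```glsl
// Desaturate toward the pixel's luminance just enough to bring the peak
// channel back into displayable range, instead of hard-clipping it.
vec3 tonemap_desat_clip(vec3 color) {
    float peak = max(color.r, max(color.g, color.b));
    if (peak <= 1.0) {
        return color; // Already displayable; leave untouched.
    }
    float lum = dot(color, vec3(0.2126, 0.7152, 0.0722)); // Rec. 709 luminance
    // mix() factor that lands the peak channel exactly on 1.0.
    float t = clamp((peak - 1.0) / max(peak - lum, 1e-5), 0.0, 1.0);
    return clamp(mix(color, vec3(lum), t), vec3(0.0), vec3(1.0));
}
```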

Custom tonemapping shaders are not (yet?) supported, and the workaround needed to make them work, using a Viewport and a ViewportContainer or ViewportTexture, has a lot of drawbacks (involving Controls, window scaling modes, etc.). It also means the lighting cannot be previewed accurately in-editor, because the custom tonemapping shader isn't part of the Environment.

Salient point: if the system decided on here causes problems for Control nodes or window viewport scaling, or has any of the other downsides of the viewport-based workaround, it may end up being disused. Custom post-processing should be basically transparent to the rest of the development experience, including its interaction with other, unrelated features (CanvasLayers, window viewport scaling, etc.). Approaches that are not basically transparent once set up should be scrutinized heavily to see whether their tradeoffs are actually necessary. As such, I'm skeptical of the FullScreenQuad approach.

Calinou commented 2 years ago

> Using ACES Fitted avoids this, but it changes the overall lighting balance of the scene, because ACES Fitted does a lot more than just desaturate very bright colors when they clip. This is undesirable because this project uses maps from pre-existing games, and their lighting is no longer replicated remotely faithfully.

As an aside, remember that id Tech 3 lightmaps are designed to be displayed with some kind of overbright management. This was done to compensate for the lack of HDR lightmaps, since lightmaps were stored in an LDR format for performance reasons. This is controlled by the r_mapOverBrightBits cvar, which defaults to 2.[^1]

There's also the r_intensity cvar which multiplies the brightness of all textures (including non-world textures, so it can have undesired effects on the HUD). I think r_intensity also multiplies the brightness of the lightmap itself, but I haven't verified this.

I've found that most id Tech 3 maps look subjectively better if you reduce r_mapOverBrightBits to 1 and increase r_intensity to 1.5. It gets rid of the notoriously "dull" look of some maps, especially in Enemy Territory.

That said, when using ACES tonemapping, this kind of tweak is probably counterproductive. It's worth keeping in mind if you still intend to use linear tonemapping (e.g. because of technical limitations, or to maximize performance).

On top of that, you may also want to add some constant ambient lighting to the whole scene. id Tech 3 doesn't have a built-in cvar for this, but I've found that it can help with areas of maps that are too dark (which occurs more often with the aforementioned tweaks). DarkPlaces has an r_ambient cvar that adds to every texel of the lightmap (this is different from Godot's implementation, which max()es every texel with the ambient light instead). It will brighten the entire scene a bit, but it often looks subjectively better.

You can probably simulate the above tricks in Godot by manipulating the lightmap data with the Image class before loading it. If performance is an issue, you can cache the lightmap data to disk once it's been manipulated.
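
A rough sketch of that idea, assuming Godot 4's Image API (the multiplier and ambient constant below are placeholders, not tuned values):

```gdscript
# Emulates an r_intensity-style multiply plus an r_ambient-style constant add
# by editing the lightmap texels before creating the texture.
func tweak_lightmap(path: String) -> ImageTexture:
    var img := Image.load_from_file(path)  # path to a lightmap image on disk
    for y in img.get_height():
        for x in img.get_width():
            var c := img.get_pixel(x, y)
            c.r = c.r * 1.5 + 0.03
            c.g = c.g * 1.5 + 0.03
            c.b = c.b * 1.5 + 0.03
            img.set_pixel(x, y, c)
    return ImageTexture.create_from_image(img)
```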

[Image comparison: Vanilla | Tweaked | Tweaked + Simulated ambient[^2]]

[^1]: There is also r_overBrightBits, but I personally haven't played much with it.
[^2]: I added constant brightness using GIMP. It'd look less dull if the actual lightmap data was modified.

wareya commented 2 years ago

Definitely well aware of the overbright bits stuff. The main issue with ACES Fitted for me, pushing me toward post-processing tonemapping, is how it crushes dark areas in ways that the original engine does not (and, as such, that the original map designers did not account for). If there were a way to use ACES Fitted without the weird behavior at the dark end of the tonemapping curve(s), I might be able to use it and not have to rely on post-processing.

(rest of this post is a tangent that doesn't really have anything to do with this proposal, feel free to ignore)

(Importantly, in Quake 3, when using the r_mapOverBrightBits stuff on an old-school computer setup with the original OpenGL 1 renderer, you were expected to have a brighter-than-normal monitor to make up the lost brightness. It also worked differently in windowed mode than in fullscreen mode, i.e. not at all. So when using ioquake3's OpenGL 2 renderer, or anything similar where the design constraints are different, the exact settings you use are a bit touchy. See also: https://github.com/ioquake/ioq3/issues/178. This is what I get with my current ioquake3 OpenGL 2 settings, which doesn't lose overall brightness like your tweaked screenshot does: https://user-images.githubusercontent.com/585488/168485094-339e88d4-cd99-4ac2-a1cd-17b37be5c3b6.png)

(As a side note, my earlier in-Godot screenshots use only the LDR part of the lightmaps, because the model conversion process I'm currently testing with clips off the HDR part, with the sky-lit areas being lit by a DirectionalLight instead. Room for improvement; I might have to build tools for dumping HDR lightmaps manually if I can't find them, but that's unrelated to this proposal.)

> DarkPlaces has an r_ambient cvar that adds to every texel of the lightmap (this is different from Godot's implementation, which max()es every texel with the ambient light instead).

I'm, uh, actually doing witchcraft: loading the lightmaps as gray AO and then attempting to reintroduce the color with a second material pass in multiply blend mode. I'm still working on making it accurate (and might not be able to make it 100% accurate), but it means I don't (yet) have to interact with Godot's lightmap system, which seems largely built around in-engine baking; I haven't figured out how to shove pre-existing lightmaps into it yet. (Colored AO when? I know colored AO is super nonstandard, but it would simplify importing models that have already had full, colored light simulation done to them.)

clayjohn commented 2 years ago

The below is a sketch of some rough ideas that will have to be developed further.

Reduz and I discussed post-processing again today at the Godot sprint. We agreed on the following:

  1. To support this, we need to expose more of the resources used by the renderer to script (e.g. users need access to the backbuffer, depth buffer, etc. from script).
  2. We need to implement hooks in the renderer where render passes can be inserted (this doesn't necessarily need to be constrained to post-processing).
  3. Ideally, an implementation would look something like a RenderingProcess resource that can be added to the Environment. The RenderingProcess resource would expose a script that issues rendering commands using the RenderingDevice. This is an alternative to using a visual graph (render passes are essentially a form of graph); a sketch of the idea follows.
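
Purely as illustration, such a resource might be shaped like the following (hypothetical API; neither RenderingProcess nor a _render() hook exists yet):

```gdscript
# Hypothetical sketch only: "RenderingProcess" and the _render() callback are
# the proposed shape of the API, not an existing Godot class.
class_name InvertColorsProcess
extends Resource  # stand-in for the proposed RenderingProcess base class

# The renderer would call this at the chosen hook point, handing the script
# the exposed buffers so it can issue commands on the RenderingDevice.
func _render(rd: RenderingDevice, color_buffer: RID, depth_buffer: RID) -> void:
    # e.g. bind a compute pipeline that reads and writes color_buffer:
    #   var list := rd.compute_list_begin()
    #   ...bind pipeline and uniform sets, dispatch workgroups...
    #   rd.compute_list_end()
    pass
```
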
Calinou commented 2 years ago

> Using ACES Fitted avoids this, but it changes the overall lighting balance of the scene, because ACES Fitted does a lot more than just desaturate very bright colors when they clip.

Not related to this proposal, but this makes me wonder if we could add shadows/midtones/highlights adjustments to Environment (as part of the adjustments checkbox). This would allow for more gradual adjustments of brightness compared to just adjusting the entire scene's brightness. For instance, to counteract ACES' overall darkening of the scene, you could set Shadows to 1.4, Midtones to 1.2 and Highlights to 1.0 (all values default to 1.0).

I've seen other engines that support this out of the box, but I don't know how expensive this kind of filter is.
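
One common way to implement this kind of filter (a sketch of the general technique, not a claim about how any particular engine does it) is to weight per-pixel gain by luminance bands. As a 2D screen filter for prototyping, assuming Godot 4 shader syntax:

```glsl
shader_type canvas_item;

uniform sampler2D screen_tex : hint_screen_texture, filter_linear;
uniform float shadows : hint_range(0.0, 4.0) = 1.4;
uniform float midtones : hint_range(0.0, 4.0) = 1.2;
uniform float highlights : hint_range(0.0, 4.0) = 1.0;

void fragment() {
    vec3 c = texture(screen_tex, SCREEN_UV).rgb;
    float lum = dot(c, vec3(0.2126, 0.7152, 0.0722)); // Rec. 709 luminance
    // Smooth weights for the three bands; they always sum to 1.
    float shadow_w = 1.0 - smoothstep(0.0, 0.5, lum);
    float highlight_w = smoothstep(0.5, 1.0, lum);
    float mid_w = 1.0 - shadow_w - highlight_w;
    COLOR.rgb = c * (shadows * shadow_w + midtones * mid_w + highlights * highlight_w);
}
```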

WrobotGames commented 2 years ago

Should this 'post-process shader' be part of the Environment resource or the CameraEffects resource? Or is the CameraEffects resource reserved for 'true' camera effects (exposure, DOF, motion blur, film grain, vignette)? (Shouldn't glow and adjustments be part of that resource then?) It's kinda vague.

clayjohn commented 2 years ago

> Should this 'post-process shader' be part of the Environment resource or the CameraEffects resource? Or is the CameraEffects resource reserved for 'true' camera effects (exposure, DOF, motion blur, film grain, vignette)? (Shouldn't glow and adjustments be part of that resource then?) It's kinda vague.

Right now the CameraEffects resource is more reserved for "true" camera effects. But it doesn't necessarily have to remain that way. In my opinion, the custom post processing should be implemented in Environment first, then an override can be added to CameraEffects if there is justification/demand.

h0lley commented 1 year ago

In terms of usability, how about a resource for each effect, plugged into an array held by WorldEnvironment? Array items can be nicely ordered via drag and drop in the inspector. It could be an inheritance tree like Resource > PostProcessingEffect > FogEffect, and we'd add our own effects by extending PostProcessingEffect (see the sketch below).

That could also make for a better split between 2D and 3D. Related: #4564
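
A minimal sketch of that shape (all class names here are hypothetical, taken from the comment above):

```gdscript
# post_processing_effect.gd -- hypothetical base class; nothing here exists
# in the engine today.
class_name PostProcessingEffect
extends Resource

@export var enabled: bool = true

# A user-defined effect would then simply extend it, e.g. in fog_effect.gd:
#
#   class_name FogEffect
#   extends PostProcessingEffect
#
#   @export var density: float = 0.05
#
# ...and WorldEnvironment (or a helper node) would expose the ordered stack:
#
#   @export var effects: Array[PostProcessingEffect] = []
```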

darthLeviN commented 1 year ago

The workflow isn't the only thing that needs to change. The current system has overhead: it renders everything into a separate texture and then composites it onto the main one, instead of rendering directly into an existing one.

There has to be a way to tell a viewport to reuse an existing viewport buffer. I suggest the changes below:

1- A new object type called ViewportHook that connects to an already existing viewport (with the option to connect to the main one) and has an option to either clear the previous depth buffer, take a snapshot of it (and clear it or not afterwards), or not touch it at all.

2- A new material/shader type called "viewport shader" that has DEPTH and ALBEDO in/outs, and maybe a SKY output? I'm not sure what else could be added here; some matrix built-ins are needed for sure. This shader is then plugged into a Viewport or ViewportHook. next_pass should be allowed to stack shaders.

3- A new material type, ViewportMaterial: it basically moves all the environment management to a material that is plugged into the viewport.

4- In the project settings, one should be allowed to select a viewport material for the default viewport.

The benefits of this setup are:

1- It allows the most advanced users to avoid a lot of overhead.

2- It allows the existing visual shader editor to give a visual perspective on the rendering workflow if needed.

3- It adds to the current system instead of changing it, and it is backwards compatible.

This is the best I could come up with; I don't know if it has any flaws. A sketch of what the "viewport shader" from point 2 might look like follows.
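
(For illustration only; the syntax here is entirely made up and exists nowhere in Godot:)

```glsl
// Hypothetical "viewport shader" -- no such shader_type exists today.
shader_type viewport;

void process() {
    // ALBEDO and DEPTH would be read/write, backed by the hooked viewport's
    // buffers instead of a freshly allocated texture.
    if (DEPTH >= 1.0) {
        ALBEDO = vec3(0.0); // e.g. a SKY output could feed background pixels
    }
}
```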

Just wanted to add that I don't think there can be a great workflow solution for post-processing in Godot 4, but rather something that's 'OK'. The backbone for the workflow upgrade should be put in place before there is any workflow upgrade.

VantaGhost commented 1 year ago

I have been trying to implement various post-processing effects in Godot 4 and have had a rather frustrating experience, particularly with more complex effects, but even with simpler ones. I definitely think Godot needs a dedicated way to place post-processing effects into the existing post-processing stack.

Both of the ways the Godot documentation suggests for doing post-processing come with crippling limitations.

An additional feature that would greatly benefit a new post-processing system would be the ability to write to the buffers, just like the built-in processes are able to (I imagine, though I might be wrong).

These features would massively extend the capabilities of the renderer without needing to modify engine code, or over-complicating things for those who just want to use the built-in effects.

OK, thanks for reading my rant. I'm really enjoying Godot so far, just finding a few features lacking, particularly in the technical art department.

MegadronA03 commented 12 months ago

For post-processing, I think it's better to add one dedicated node, plus one post-processing shader resource (if there are no plans to modify the rendering pipeline).

This node would "add" another next-pass shader to each child node (or to specific nodes together with their parents; this might be specified in the node itself, subject to change/review), so it should be much more flexible for applying different effects to different things on the screen. That also includes nodes that specialize in background rendering (currently the nodes that use Environment; I mention this because there are quite a lot of features there that should not be managed by it).

That would probably also allow modifying node rendering properties, adding transparency, or cutting a rendered mesh using masks or a discard shader. It also addresses the transparency problem, because the post-processing literally modifies the end material of the object instead of adding one more layer of overdraw.

I'm also looking forward to changing how skybox shaders work, so we could apply skybox textures to other meshes/things (even to post-processing) for fake skyboxes like in Source games.

Edit: after looking over my suggestions, I think it's better to approach shader typing the way Godot does with its objects: a main empty shader class with Sky, CanvasItem/2D, 3D, Prep, and Post children that extend it with their own uniform hints, and which could themselves be converted/extended into your own custom shader class for editing, just like objects in Godot.

The base hierarchy that I thought of:

Each shader would have a "next pass" and a "prep pass" that allow constructing the desired shader out of other shaders. This could be more flexible for adding obscure functionality, like drawing 2D elements directly in 3D or the other way around (3D on 3D), without the limitations of the SubViewport node.

This system would not only make the rendering process much more transparent to the end user, but would also make "next pass" much more useful than it currently is, and it would solve the problems that post- and pre-process shaders currently cause with transparency.

WorldEnvironment should then be changed to WorldLight or LightSettings (because the suggested implementation literally deconstructs the Environment resource into multiple vertex shaders), controlling the light settings within the engine.

P.S. I'm aware that this literally changes the entire underlying rendering system, so it's probably going to land in something like Godot 5.

bertodelrio256 commented 5 months ago

The biggest issue with the fullscreen quad is that transparent objects will not be rendered through it. That is a major showstopper, as water/oceans are not seen. Also, using a quad that is in the scene is not actual post-processing.

Zireael07 commented 5 months ago

@bertodelrio256 There is a workaround for that: dithered transparency.

bertodelrio256 commented 5 months ago

> @bertodelrio256 There is a workaround for that: dithered transparency.

Yeah, that wouldn't work on a water shader. For one, it would look super noisy, and two, the water shader must be transparent because it reads from depth. My current workaround is to just set the ALPHA of the fullscreen quad to 0.6; that way I can still see transparents and get a decent mix of the post effect. And when I'm not using any post effects, I set the ALPHA to 0.0.
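
Roughly, the quad's shader ends up looking like this (a sketch assuming Godot 4 syntax; the inverted color stands in for the actual effect):

```glsl
shader_type spatial;
render_mode unshaded;

uniform sampler2D screen_tex : hint_screen_texture, filter_nearest;
// 0.6 blends the effect with whatever renders behind the quad (transparents
// included); 0.0 effectively disables it.
uniform float effect_mix : hint_range(0.0, 1.0) = 0.6;

void vertex() {
    // Cover the screen in clip space (on Godot 4.3+ the near plane is z = 1
    // due to reversed depth; earlier versions used POSITION = vec4(VERTEX, 1.0)).
    POSITION = vec4(VERTEX.xy, 1.0, 1.0);
}

void fragment() {
    vec3 processed = vec3(1.0) - texture(screen_tex, SCREEN_UV).rgb; // placeholder effect
    ALBEDO = processed;
    ALPHA = effect_mix;
}
```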