godotengine / godot-proposals

Godot Improvement Proposals (GIPs)
MIT License

Add tools to allow a `SubViewport` to act as a backbuffer #6208

Open SlugFiller opened 1 year ago

SlugFiller commented 1 year ago

Describe the project you are working on

2D game

Describe the problem or limitation you are having in your project

While CanvasGroup and Clip Children are great and long-needed tools, they are still both limited in two notable ways:

  1. They can't be cascaded. You can't double-clip, or use a CanvasGroup inside a CanvasGroup.
  2. They are tightly linked to the scene tree structure. For instance, the "output" of a CanvasGroup can only be mixed with the screen rendered so far, and only using the GPU's built-in blend modes. You can't, for example, use it as input for other elements' shaders, or mix two CanvasGroups' outputs together.

A SubViewport has significantly more freedom in how it can be used. However, CanvasGroup was needed specifically because using SubViewport as a backbuffer had notable limitations. Those limitations are:

  1. A SubViewport needs to have its size and canvas transformation manually set. Making a SubViewport match the parent is a non-trivial task, especially inside the editor, where the root viewport has a custom transformation.
  2. The timing at which a SubViewport renders is mostly up to the RenderingServer. This can sometimes cause a "one frame mismatch". For example, if objects inside and outside the SubViewport use a common Skeleton2D, they may sometimes render in different poses, because the skeleton moved between renders.
  3. Since each SubViewport takes a certain amount of memory, you can't really use it inside a sprite/character that may be replicated hundreds of times in the scene. By contrast, since CanvasGroup reuses the same backbuffer over and over, you can have as many of them in the scene as you want, at least memory-wise.

Describe the feature / enhancement and how it helps to overcome the problem or limitation

I suggest adding a couple of tools.

First, an option for SubViewport to inherit the size and world transformation from its parent. It could be as simple as allowing 0x0 to count as "inherit", since it is currently an invalid setting. The important part is inheriting the transformation as well, so things like Camera2D don't need to be replicated inside the SubViewport. If this is not needed, a CanvasLayer can be used to reset the transformation, so it doesn't need to be a separate option from auto-size.

Note that this is different from using a ViewportContainer because a ViewportContainer actually has its own size which doesn't necessarily match the parent viewport, and also, it causes the SubViewport to render, which might not be desirable.
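As a rough sketch of the proposed usage (the 0x0-as-inherit convention is exactly that, a proposal, not existing API):

```gdscript
# Hypothetical: a size of 0x0 is currently invalid, so it could be
# repurposed to mean "inherit the parent viewport's size and world
# transformation". None of this is existing API.
var back_buffer := SubViewport.new()
back_buffer.size = Vector2i.ZERO  # proposed "inherit" sentinel
add_child(back_buffer)
# No Camera2D replication needed: the parent's canvas transform
# would be applied automatically.
```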

The next tool is a special type of canvas item that forces a viewport to render when said canvas item would be rendered. This means adding the feature to RenderingServer. The reason to do this using a canvas item is that canvas items have a clear rendering order that can be manipulated with the likes of `canvas_item_set_draw_index`. So such a functionality would run at a specific and defined timing inside the rendering of the parent viewport. A necessary limitation is that such an item can only be present inside the immediate parent viewport of the SubViewport, and not in any other viewport.

The final and most important tool is a special type of canvas item that forces a SubViewport's buffer to be dropped or reused. This allows having the same viewport-based effect be applied on 50 sprites using the same amount of memory as applying it to just one. Like the above, it must be present in the immediate parent viewport of the SubViewport. Trying to use the SubViewport's texture after this canvas item has been "rendered" would simply yield a 0x0 or 1x1 black/transparent texture.

Needless to say, all of this only applies in 2D, because 3D doesn't have the same sort of guarantees regarding the order in which elements are rendered.

Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams

For the first tool, it's as simple as implicitly taking size 0x0 to mean "inherit". Alternatively, it can just be a single boolean value/checkbox, called "inherit parent viewport's size and world transformation". In such a case, the size can instead be used as a margin for "widening" the backbuffer, e.g. for screen-space blur.

For the second and third tools, a new method would be added to RenderingServer: `RID viewport_action_create(RID viewport, ViewportAction action)`, where `ViewportAction` is an enum containing `VIEWPORT_ACTION_RENDER` and `VIEWPORT_ACTION_RELEASE`. The returned RID can be used with `canvas_item_set_parent`, `canvas_item_set_visible` and `canvas_item_set_draw_index`, just as any other canvas item would be.
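A hedged sketch of how that API might be driven from script. Every identifier below (`viewport_action_create`, the `VIEWPORT_ACTION_*` constants) is part of the proposal, not existing RenderingServer API:

```gdscript
# Proposed, non-existing API sketch. Assumes `back_buffer` is a
# SubViewport that is a child of this canvas item's node.
var vp: RID = back_buffer.get_viewport_rid()

# Canvas item that forces the SubViewport to render at this exact
# point in the parent viewport's draw order.
var render_action: RID = RenderingServer.viewport_action_create(
		vp, RenderingServer.VIEWPORT_ACTION_RENDER)
RenderingServer.canvas_item_set_parent(render_action, get_canvas_item())
RenderingServer.canvas_item_set_draw_index(render_action, 0)

# Canvas item that releases the buffer after the last sprite that
# samples it has been drawn, so the memory can be reused by the
# next sprite's effect.
var release_action: RID = RenderingServer.viewport_action_create(
		vp, RenderingServer.VIEWPORT_ACTION_RELEASE)
RenderingServer.canvas_item_set_parent(release_action, get_canvas_item())
RenderingServer.canvas_item_set_draw_index(release_action, 100)
```

Because both actions are ordinary canvas items, their visibility and draw index control exactly when in the frame the render and the release happen.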

Additionally, a matching node inheriting from CanvasItem could be added. Or it could inherit from Node, since it doesn't really need any functionality from CanvasItem, other than possibly visibility toggle (to act as enable/disable, although not mandatory).

Actions should probably be limited to one render of each viewport per frame, i.e. if a viewport is rendered and then released, it should not be possible to re-render it later in the frame, nor should recursive rendering be allowed. Any possible use case for such behavior should probably be handled by a different method.

If this enhancement will not be used often, can it be worked around with a few lines of script?

The first tool, maybe, although it's hard to get the world matrix of a viewport, especially in editor mode, and the result would very often be "off by a frame" due to camera movements. The others are impossible to work around, due to the need for them to run in the middle of a render.
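For completeness, the partial workaround for the first tool looks roughly like this (a sketch only; it suffers from the off-by-a-frame problem, and it does not handle the editor's custom root transformation at all):

```gdscript
extends SubViewport

# Best-effort sync of size and canvas transform with the parent
# viewport, re-run every frame. This lags camera movement by one
# frame, because it copies the transform as it was when _process
# ran, not as it is when the frame is actually drawn.
func _process(_delta: float) -> void:
	var parent := get_parent().get_viewport()
	size = parent.size
	canvas_transform = parent.canvas_transform
```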

Is there a reason why this should be core and not an add-on in the asset library?

Rendering is core.

SlugFiller commented 1 year ago

After playing around with some code, I realized this is a wrong solution. While a screen-sized viewport can be useful in 3D for screen-space effects that respect a resizable window, in 2D, this approach has notable flaws.

First off, a SubViewport doesn't inherit 2D lights. This is notable with directional lights, as any canvas items with a normal map inside the viewport would not render correctly. It's still usable for masks, but it's not a replacement for a canvas group for more complex mixing.

Second, while it's more of an editor flaw than an issue of the method, the editor does not allow directly editing objects inside a viewport.

Third, inheriting the transform of the parent canvas may not be sufficient if attempting to use this within a sprite, since you also need to inherit the transform of the sprite itself.

Finally, simply being able to capture an image using a viewport isn't useful unless you can also blit it to screen, or use it in a shader. The issue is that there's no full-screen blit drawing command (although there probably should be), and sampling in screen space is actually quite a challenge, since texture sampling expects UVs in the 0-1 range, while elements like FRAGCOORD are in pixels, and the screen size is not made available to the shader.
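To illustrate the sampling problem: without a renderer-provided UV hint, a canvas item shader has to reconstruct screen-space UVs itself, e.g. by having script pass the captured texture's pixel size in as a uniform. A sketch in Godot's shading language, where `captured` and `captured_size` are hypothetical uniforms the user must wire up manually:

```glsl
shader_type canvas_item;

// Hypothetical uniforms: the captured SubViewport texture and its
// pixel size, which the shader cannot query on its own.
uniform sampler2D captured;
uniform vec2 captured_size;

void fragment() {
	// FRAGCOORD is in pixels; texture() expects UVs in 0-1, so the
	// size must be supplied from script and kept in sync with the
	// actual buffer size.
	vec2 screen_uv = FRAGCOORD.xy / captured_size;
	COLOR = texture(captured, screen_uv);
}
```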

A more appropriate solution would be to be able to capture the current backbuffer long-term, use it as a texture (with renderer-supported UV hint, similar to SCREEN_UV for backbuffers), and an ability to dispose of it later.

Drawing masks into a backbuffer should be fairly simple using a combination of CanvasGroup and setting the light mask to empty, to prevent lighting effects from corrupting the mask. It's a much cleaner solution than wrangling viewports.

So, while auto-sized viewports can be useful, the other elements are probably a step in the wrong direction, especially for my specific use case.