Scirra / Construct-feature-requests

A place to submit feature requests and suggestions for Construct.
https://www.construct.net

Effects SDK: down and up-sampling #75

Open F3der1co opened 9 months ago

F3der1co commented 9 months ago

Reviewed guidelines

Checked for duplicate suggestions

Summary

In shaders, a common performance technique is to downsample the texture, run the shader on the downsampled version, and then upsample again. This gives large performance gains because there are far fewer fragments to run the shader on, especially for shaders that take multiple samples, such as anything that uses blur or some kind of averaging (the down- and up-sampling also introduces some blurring itself, which often helps here). Examples: blur, bokeh, bloom, glow, kuwahara, SSAO, soft shadows, some outline shaders, etc.
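To make the gain concrete, here is a minimal sketch of the technique in raw WebGL2 via TypeScript, not Construct's effect SDK. The `copyProgram`, `blurProgram` and `drawFullscreenQuad` names are assumptions standing in for a plain copy shader, the expensive effect and a quad-draw helper; they are not Construct APIs.

```typescript
// Assumed to exist elsewhere (not part of any Construct API):
declare const copyProgram: WebGLProgram;      // plain texture-copy shader
declare const blurProgram: WebGLProgram;      // the expensive blur shader
declare function drawFullscreenQuad(): void;  // draws a screen-covering quad

// Create a texture + framebuffer pair of the given size.
function createTarget(gl: WebGL2RenderingContext, w: number, h: number) {
  const tex = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA8, w, h, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
  const fbo = gl.createFramebuffer()!;
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
  return { tex, fbo };
}

function blurDownsampled(gl: WebGL2RenderingContext, srcTex: WebGLTexture,
                         fullW: number, fullH: number) {
  const halfW = fullW >> 1, halfH = fullH >> 1;

  // 1. Downsample: copy the full-resolution source into a half-size target.
  const half = createTarget(gl, halfW, halfH);
  gl.bindFramebuffer(gl.FRAMEBUFFER, half.fbo);
  gl.viewport(0, 0, halfW, halfH);
  gl.useProgram(copyProgram);
  gl.bindTexture(gl.TEXTURE_2D, srcTex);
  drawFullscreenQuad();

  // 2. Run the expensive blur at half resolution: only 1/4 of the fragments,
  //    and each blur tap already covers twice the screen area.
  const blurred = createTarget(gl, halfW, halfH);
  gl.bindFramebuffer(gl.FRAMEBUFFER, blurred.fbo);
  gl.useProgram(blurProgram);
  gl.bindTexture(gl.TEXTURE_2D, half.tex);
  drawFullscreenQuad();

  // 3. Upsample: stretch the half-resolution result back to full size;
  //    bilinear filtering hides most of the scaling.
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.viewport(0, 0, fullW, fullH);
  gl.useProgram(copyProgram);
  gl.bindTexture(gl.TEXTURE_2D, blurred.tex);
  drawFullscreenQuad();
}
```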

Possible workarounds or alternatives

Maybe you could do some really weird effect where you render to a corner of the texture and then do some kind of down/up-sampling with a convoluted effect stack that skips the other pixels. But that would be more of a hack than an alternative.

Proposed solution

I would love it if we could specify that an effect should downsample in a pre-draw step (and by how much), and then upsample again after the effect has been drawn.

(Ideally we would get even more control, so we could run a chain of passes on the downsampled texture, e.g. a two-pass blur without upsampling in between. But that might be difficult to implement with the way effect stacking works in C3; perhaps one effect could do multiple passes instead of always requiring them to be stacked in the editor. Since it's unclear how that would fit into C3's effect stacking system, I'll keep it at the first suggestion.) Maybe even a framebuffer we can write to, but these ideas get more and more complex, so eh.
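To illustrate the first, simpler suggestion, here is a hypothetical sketch of what such a per-effect control could look like, written as a TypeScript type purely for illustration; the `downsample` and `upsampleAfter` fields do not exist in Construct's addon.json schema.

```typescript
// Hypothetical only: Construct's effect addon.json has no such fields today.
interface HypotheticalEffectDescriptor {
  id: string;
  // Render the effect's input at this fraction of the original surface size
  // (e.g. 0.5 = half width and height, i.e. 1/4 of the fragments)...
  downsample?: number;
  // ...and stretch the result back up after the effect has drawn.
  upsampleAfter?: boolean;
}

const gaussianBlur: HypotheticalEffectDescriptor = {
  id: "my-gaussian-blur",
  downsample: 0.5,
  upsampleAfter: true
};
```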

Why is this idea important?

Performance of effects.

Additional remarks

No response

AshleyScirra commented 9 months ago

It would probably be easier to implement this if you could just set the entire effect chain to be rendered at a lower resolution. E.g. if you set the entire object to downsample to 50%, it starts by pre-drawing the object, shrinking the surface to 50% of the size, and then rendering the whole effect chain at that size, and then stretching the result to fill the original size of the object. Do you think that would do the job?
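For comparison with the per-effect sketch above, the whole-chain variant described here could look roughly like the following, again in raw WebGL2/TypeScript with nothing Construct-specific; `createTarget`, `copyProgram` and `drawFullscreenQuad` are the assumed helpers from the earlier sketch, and `effectPrograms` is an assumed list of compiled effect shaders.

```typescript
// Assumed helpers, as in the earlier sketch:
declare const copyProgram: WebGLProgram;
declare const effectPrograms: WebGLProgram[];  // the object's whole effect chain
declare function drawFullscreenQuad(): void;
declare function createTarget(gl: WebGL2RenderingContext, w: number, h: number):
  { tex: WebGLTexture; fbo: WebGLFramebuffer };

function runChainAtHalfRes(gl: WebGL2RenderingContext, objectTex: WebGLTexture,
                           fullW: number, fullH: number) {
  const halfW = fullW >> 1, halfH = fullH >> 1;

  // Pre-draw the object once and shrink it to 50% of its original size.
  let read = createTarget(gl, halfW, halfH);
  gl.bindFramebuffer(gl.FRAMEBUFFER, read.fbo);
  gl.viewport(0, 0, halfW, halfH);
  gl.useProgram(copyProgram);
  gl.bindTexture(gl.TEXTURE_2D, objectTex);
  drawFullscreenQuad();

  // Render the whole effect chain at the reduced size, ping-ponging targets.
  for (const program of effectPrograms) {
    const write = createTarget(gl, halfW, halfH);
    gl.bindFramebuffer(gl.FRAMEBUFFER, write.fbo);
    gl.useProgram(program);
    gl.bindTexture(gl.TEXTURE_2D, read.tex);
    drawFullscreenQuad();
    read = write;
  }

  // Stretch the final result back to the original size of the object.
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.viewport(0, 0, fullW, fullH);
  gl.useProgram(copyProgram);
  gl.bindTexture(gl.TEXTURE_2D, read.tex);
  drawFullscreenQuad();
}
```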

F3der1co commented 9 months ago

It would probably be easier to implement this if you could just set the entire effect chain to be rendered at a lower resolution. E.g. if you set the entire object to downsample to 50%, it starts by pre-drawing the object, shrinking the surface to 50% of the size, and then rendering the whole effect chain at that size, and then stretching the result to fill the original size of the object. Do you think that would do the job?

That does sound interesting! So if I understand it correctly, this would be something defined inside C3 (i.e. in the effects property of objects, layers and layouts), not inside the effect's addon.json? This might make it even more flexible than my initial suggestion.

edit: An issue I see is how to composite some effects back in if the whole stack is at a lower resolution, for example with a full-screen post-processing bloom effect:

effect 1: sample the background and extract pixels above a luminosity threshold
effect 2: blur these pixels (probably multiple passes)
effect 3: additively blend the blurred pixels with the background texture

Effects 1 and 2 at half or quarter resolution would be fine, but the blending shouldn't be, otherwise the whole final image would get some blur from the down-sample step; so the up-sampling would need to happen before the blending. (Same problem with any effect that is composited/blended back in, like SSAO.)
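A sketch of that pass ordering, in the same WebGL2/TypeScript style as before (the shader program names and helpers are assumptions, not Construct APIs): the threshold and blur passes run at quarter resolution, but the result is upsampled before the final additive blend, so the background itself never goes through the downsample.

```typescript
// Assumed helpers and programs (not Construct APIs):
declare const thresholdProgram: WebGLProgram; // extracts pixels above a luminosity threshold
declare const blurProgram: WebGLProgram;      // the expensive blur
declare const copyProgram: WebGLProgram;      // plain copy, used for the upsample
declare const addBlendProgram: WebGLProgram;  // adds glow (unit 1) over background (unit 0)
declare function drawFullscreenQuad(): void;
declare function createTarget(gl: WebGL2RenderingContext, w: number, h: number):
  { tex: WebGLTexture; fbo: WebGLFramebuffer };

function bloom(gl: WebGL2RenderingContext, backgroundTex: WebGLTexture,
               fullW: number, fullH: number) {
  const qW = fullW >> 2, qH = fullH >> 2; // quarter resolution for the cheap passes

  // Effect 1 at quarter resolution: threshold the background.
  const bright = createTarget(gl, qW, qH);
  gl.bindFramebuffer(gl.FRAMEBUFFER, bright.fbo);
  gl.viewport(0, 0, qW, qH);
  gl.useProgram(thresholdProgram);
  gl.bindTexture(gl.TEXTURE_2D, backgroundTex);
  drawFullscreenQuad();

  // Effect 2 at quarter resolution: blur the thresholded pixels.
  const blurred = createTarget(gl, qW, qH);
  gl.bindFramebuffer(gl.FRAMEBUFFER, blurred.fbo);
  gl.useProgram(blurProgram);
  gl.bindTexture(gl.TEXTURE_2D, bright.tex);
  drawFullscreenQuad();

  // Upsample BEFORE blending: stretch only the glow to full resolution.
  const glow = createTarget(gl, fullW, fullH);
  gl.bindFramebuffer(gl.FRAMEBUFFER, glow.fbo);
  gl.viewport(0, 0, fullW, fullH);
  gl.useProgram(copyProgram);
  gl.bindTexture(gl.TEXTURE_2D, blurred.tex);
  drawFullscreenQuad();

  // Effect 3 at full resolution: additively blend the glow over the sharp,
  // never-downsampled background.
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.useProgram(addBlendProgram);
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, backgroundTex);
  gl.activeTexture(gl.TEXTURE1);
  gl.bindTexture(gl.TEXTURE_2D, glow.tex);
  drawFullscreenQuad();
}
```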

F3der1co commented 9 months ago

An interesting thought that came up is that this might also be a way to mix high and low resolution, e.g. a pixel-art game with a high-resolution UI for accessibility in fonts etc. But for that to work there would also need to be a way to define the sampling mode for the down- and up-sample. That way we could render the pixel-art layer at a lower resolution (e.g. 25%) with nearest sampling and the UI layers at full resolution with bilinear.
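In raw WebGL2 terms, the sampling mode is just the filter set on the intermediate texture before it is stretched back up; the point of the request is exposing that choice per layer. A minimal illustration, with `pixelArtLayerTex` and `uiLayerTex` as assumed layer textures:

```typescript
declare const gl: WebGL2RenderingContext;
declare const pixelArtLayerTex: WebGLTexture; // rendered at 25% resolution
declare const uiLayerTex: WebGLTexture;       // rendered at full resolution

// Upsampling the pixel-art layer with NEAREST keeps its pixels crisp.
gl.bindTexture(gl.TEXTURE_2D, pixelArtLayerTex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

// The UI layer keeps LINEAR (bilinear) filtering so text stays smooth if scaled.
gl.bindTexture(gl.TEXTURE_2D, uiLayerTex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
```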

NickR-Git commented 9 months ago

It would probably be easier to implement this if you could just set the entire effect chain to be rendered at a lower resolution. E.g. if you set the entire object to downsample to 50%, it starts by pre-drawing the object, shrinking the surface to 50% of the size, and then rendering the whole effect chain at that size, and then stretching the result to fill the original size of the object. Do you think that would do the job?

Layer (or entire layout) based resolution scaling would be significantly more useful than object-based, as fill rate/performance issues with effects are tied more closely to overall game resolution than to object size.