armory3d / armory

3D Engine with Blender Integration
https://armory3d.org
zlib License

Ability to render from multiple cameras at once and composite/layer the results into a single final image - A simple solution to 3D UI #2257

Open ItsCubeTime opened 3 years ago

ItsCubeTime commented 3 years ago

An example of how this kind of solution is used in another game engine: https://www.unrealengine.com/marketplace/en-US/product/lgui-lex-gui-3d-ui-system-for-ue4?sessionInvalidated=true

The idea is that you have two cameras, not necessarily in the same scene, rendering at the same time, and their outputs are composited into the final image that gets rendered to the screen. This way you could have a fixed FOV for the camera shooting the scene that holds the UI elements, and an adjustable FOV for the camera shooting the game level.

If one could create something similar to UE4's node-based post-processing/compositing (https://www.youtube.com/watch?v=71FjgIJ4vck) by adding a "camera input" node and a "camera UV input" node, and by allowing materials to act as a layer between the camera rendering the scene and the finished frames being output to the screen, the possible use cases for a feature like this could expand beyond overlaying something simple like a UI to:

To summarize:

QuantumCoderQC commented 3 years ago

I think it's already possible through Haxe atm:

https://github.com/armory3d/armory_examples/tree/master/render_to_texture
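
For reference, here is a minimal sketch (written from memory, not copied from the example) of what that example does: a second camera renders into an off-screen kha.Image, and that image is bound as a texture on a material the main camera can see. The object/camera names are placeholders, and the exact fields and texture-slot indexing (CameraObject.renderTarget, materials[0].contexts[0].textures[0]) are assumptions based on the linked example, so check it for the real code.

```haxe
// Rough render-to-texture sketch for Armory (Haxe). Assumed API, placeholder names.
// Run this once the scene is loaded, e.g. from a trait's init callback.
var scene = iron.Scene.active;

// Off-screen target that the secondary (UI) camera renders into.
var rt = kha.Image.createRenderTarget(1024, 1024);
var uiCam: iron.object.CameraObject = cast scene.getChild("UICamera"); // placeholder name
uiCam.renderTarget = rt;

// Bind the target as a texture on a material seen by the main camera,
// so the UI camera's output ends up composited into the final image.
var plane = cast(scene.getChild("ScreenPlane"), iron.object.MeshObject); // placeholder name
plane.materials[0].contexts[0].textures[0] = rt;
```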

ItsCubeTime commented 3 years ago

> I think it's already possible through Haxe atm:
>
> https://github.com/armory3d/armory_examples/tree/master/render_to_texture

I just went and tried it out, it's pretty cool!

https://user-images.githubusercontent.com/20190653/124780931-477b5500-df43-11eb-9df3-29a4960fba99.mp4

I guess what I would like to see, then, to make this more user-friendly, is a way of using materials to render each frame.

So that it would be easy to set up something like the following (a rough sketch of this flow follows the list):

  1. Image is rendered from the camera (or from multiple cameras)
  2. The data is sent to a material
  3. The material is used to display the result on the screen
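
Here is one hedged sketch of how that 1-2-3 flow could be wired up with what exists today, assuming the same iron API as above (CameraObject.renderTarget, material texture slots, a writable iron.Scene.active.camera); the object names and the 1920x1080 target size are placeholders:

```haxe
// Sketch only: the game camera renders off-screen, a screen-aligned quad's
// material receives that texture, and a simple "display" camera that only
// sees the quad (plus any UI objects) is made active.
var scene = iron.Scene.active;

// 1. Image is rendered from the (game) camera into a texture.
var frame = kha.Image.createRenderTarget(1920, 1080);
var gameCam: iron.object.CameraObject = cast scene.getChild("GameCamera"); // placeholder
gameCam.renderTarget = frame;

// 2. The data is sent to a material (here: the first texture slot of the quad's material).
var quad = cast(scene.getChild("DisplayQuad"), iron.object.MeshObject); // placeholder
quad.materials[0].contexts[0].textures[0] = frame;

// 3. The material is what actually reaches the screen, through the active "display" camera.
scene.camera = cast scene.getChild("DisplayCamera"); // placeholder
```

With a node-based "camera input" in the material editor, steps 2 and 3 would collapse into editing the quad's material directly instead of touching texture slots from code.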

I don't know what you guys will think about this idea, but what if we had an alternative to using iron.Scene.active.camera or "Set Camera active" in logic nodes, something like iron.Scene.RenderMaterial? (Or, even better imo, if one could set this on a per-player basis to lay some groundwork for multiplayer. Although that's just my personal thought.)

And what if we had a special node in the material editor that lets one select a camera, with UV inputs and RGB outputs for a pre-composited final image (and preferably also individual outputs for each render pass).

ItsCubeTime commented 3 years ago

@MoritzBrueckner sorry for pinging you, but I would really love your thoughts on this as well <3

MoritzBrueckner commented 3 years ago

There is also a 3D UI example project here: https://github.com/armory3d/armory_examples/tree/master/ui_script3d.

In general I think the approach with custom render targets is flexible enough and enables you to do a lot of different things. But I agree that accessing the camera output (render targets in general) should be made easier, especially for combining the output with shader nodes. I think this is related to what is discussed in https://github.com/armory3d/armory/pull/1817 and unfortunately there seems to be no good solution right now. It's a bit more difficult than it might seem at first because when the image is requested the scene rendering hasn't completed yet, so you have to decide between using the output from the last frame or changing the render order a bit.
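
For the "use the output from the last frame" option, a minimal sketch (same assumed API and placeholder names as above) is to ping-pong two render targets from a trait, so the material only ever samples a frame the off-screen camera has already finished; this accepts one frame of latency instead of changing the render order:

```haxe
package arm;

// Sketch only: double-buffer the off-screen camera so the displaying
// material reads last frame's completed image. CameraObject.renderTarget
// and the material texture-slot access are assumptions based on the
// render_to_texture example; "UICamera"/"ScreenPlane" are placeholders.
class PingPongUiTrait extends iron.Trait {

    var a: kha.Image;
    var b: kha.Image;
    var flip = false;

    public function new() {
        super();

        notifyOnInit(function() {
            a = kha.Image.createRenderTarget(1024, 1024);
            b = kha.Image.createRenderTarget(1024, 1024);
        });

        notifyOnUpdate(function() {
            var scene = iron.Scene.active;
            var cam: iron.object.CameraObject = cast scene.getChild("UICamera");
            var plane = cast(scene.getChild("ScreenPlane"), iron.object.MeshObject);
            if (cam == null || plane == null || a == null) return;

            flip = !flip;
            // The camera writes into one target this frame...
            cam.renderTarget = flip ? a : b;
            // ...while the material reads the target finished last frame.
            plane.materials[0].contexts[0].textures[0] = flip ? b : a;
        });
    }
}
```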

Regarding iron.Scene.RenderMaterial: could you elaborate a bit on how that would work? How would it be used?