alice-i-cecile opened this issue 2 years ago
It seems you want to take UI elements out of the global layout but still maintain layout between them. The easiest way to do that would probably be to render this world-space UI to a texture with normal layout, then render that texture at the world-space coordinates you want.
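A minimal sketch of that render-to-texture setup, assuming a Bevy 0.13-era API (`RenderTarget::Image` and `TargetCamera` are the relevant pieces; exact names shift between versions):

```rust
use bevy::prelude::*;
use bevy::render::camera::RenderTarget;
use bevy::render::render_resource::{
    Extent3d, TextureDescriptor, TextureDimension, TextureFormat, TextureUsages,
};

fn setup(mut commands: Commands, mut images: ResMut<Assets<Image>>) {
    // An image for the UI camera to render into.
    let size = Extent3d { width: 512, height: 512, depth_or_array_layers: 1 };
    let mut image = Image {
        texture_descriptor: TextureDescriptor {
            label: None,
            size,
            dimension: TextureDimension::D2,
            format: TextureFormat::Bgra8UnormSrgb,
            mip_level_count: 1,
            sample_count: 1,
            usage: TextureUsages::TEXTURE_BINDING
                | TextureUsages::COPY_DST
                | TextureUsages::RENDER_ATTACHMENT,
            view_formats: &[],
        },
        ..default()
    };
    image.resize(size); // zero-fill the pixel data to match the size
    let image_handle = images.add(image);

    // A 2D camera that renders only to that image.
    let ui_camera = commands
        .spawn(Camera2dBundle {
            camera: Camera {
                target: RenderTarget::Image(image_handle.clone()),
                ..default()
            },
            ..default()
        })
        .id();

    // A normally laid-out UI tree, routed to the off-screen camera.
    commands.spawn((NodeBundle::default(), TargetCamera(ui_camera)));

    // `image_handle` can now be used on any world-space surface, e.g. as the
    // base texture of a `StandardMaterial` on a quad, or as a sprite.
}
```

This keeps layout and rendering intact; interactivity is the open question discussed next.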
Rendering to a texture is a sensible choice that I should have mentioned. The challenge with that approach though is ensuring that interactivity still works. Can buttons be clicked, can knobs be twiddled? What happens if we end up covering a world-space UI rendered to a texture with an ordinary UI element?
I do like it though, as it's likely to be fast and reliable, and it avoids the need for a synchronization system.
(cc @aevyrie for implications on your picking designs).
It could work to capture input events targeting the texture, translate the coordinates from where they hit that texture into the inner UI view, and compute interaction from there.
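For the translation step, the core math is small once the hit is expressed as a UV coordinate on the texture; a sketch (the helper name is hypothetical, and it assumes UV and UI space share a top-left origin):

```rust
use bevy::math::Vec2;

/// Map a hit on the texture (UV in [0, 1]) to a pointer position, in pixels,
/// inside the UI that was rendered onto it.
fn uv_to_ui_position(uv: Vec2, texture_size: Vec2) -> Vec2 {
    uv * texture_size
}
```

The resulting position can then drive the same hit-testing the UI would normally perform against cursor coordinates.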
I've actually had diegetic UI in mind during the picking rewrite. 🙂
The underlying pointer abstraction works on `RenderTarget`s, and the final picking sort collapses all picking hits into a unified depth test per pointer, so there is no reason this couldn't "just work".
The only thing that would need to be added is an intermediate system that can raycast against textures to get the input location on the texture, though with shader picking, this should theoretically work without needing any intermediate shims.
Excellent, I'm glad to hear you're way ahead of us. Looking forward to seeing that upstreamed!
I started work on supporting viewport and render target UIs in #5892 by letting users specify a camera entity for UI root nodes. Now that I'm reading this issue, I wonder if making it a little more flexible than just setting a camera could help with the problem that's explained in here.
I think it could already be sort of useful, but at the moment you would need to create a 2D camera with `render_target: RenderTarget::Image(...)` for each texture that you want to render some UI onto. Plus you'd probably have to handle the UI interaction manually.
Maybe something like this could be used?
```rust
/// Optionally sets the camera entity for a UI root.
#[derive(Component)]
pub struct UiRootCamera(pub Entity);

/// Optionally sets a transform for the UI root, giving it a position in the world.
#[derive(Component)]
pub struct UiRootTransform(pub GlobalTransform);
```
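Hypothetical usage, if those components landed (assuming `ui_camera` is a previously spawned camera entity):

```rust
// Render this UI root through `ui_camera` and place it in the world.
commands.spawn((
    NodeBundle::default(),
    UiRootCamera(ui_camera),
    UiRootTransform(GlobalTransform::from_xyz(1.0, 2.0, 0.0)),
));
```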
Or maybe a `UiRoot` that's a child of a non-UI entity with a `GlobalTransform` should just assume it should be rendered somewhere in the world relative to that parent's position. I'm not sure how complicated that would be, since we'd have to manage the texture, transform, and interactions internally, but it sounds to me like that kind of solution could make this super simple for end users.

I haven't thought too much about it, so let me know if this sounds interesting or insane.
> Or maybe a `UiRoot` that's a child of a non-UI entity with a `GlobalTransform` should just assume it should be rendered somewhere in the world relative to that parent's position. I'm not sure how complicated that would be, since we'd have to manage the texture, transform, and interactions internally, but it sounds to me like that kind of solution could make this super simple for end users.
This would be my preferred solution, from an end-user perspective. Setting your UI nodes as children of e.g. units should Just Work, by setting them to be rendered in world coordinates.
> Setting your UI nodes as children of e.g. units should Just Work, by setting them to be rendered in world coordinates.
Indeed, that sounds very friendly and, now that I think of it, much simpler than having an extra component that you'd have to update manually with global transform info.
One thing to validate is whether it's fine that the UI tree under that root becomes relative to your unit entity. Do you even need to render to a texture if all of that works out of the box? I think even the current interaction code may mostly work as-is, since it relies on the global transform.
EDIT: Now that I think of it, having a UI that lives inside the world comes with a few use-cases that sound very cool but are pretty different from each other, especially if you think of a split-screen multiplayer game.
Right, 3 is "billboarding, but with access to UI layouts" (#3688). Can you explain the distinction between 1 and 2 a bit more?
> Right, 3 is "billboarding, but with access to UI layouts" (#3688). Can you explain the distinction between 1 and 2 a bit more?
They're pretty similar, but if this was something the engine handled for you, then I suppose you'd need some way to express that. Maybe I'm overthinking it 🤷‍♂️
This is a very useful feature; what's the plan for it? Is there any third-party plugin we can use in the meantime?
Here's an example from an old version of bevy, if you just want to render UI at a world space location projected into screenspace: https://github.com/aevyrie/bevy_world_to_screenspace.
The comments in this PR may also be helpful: https://github.com/bevyengine/bevy/pull/1258
Thanks for the help. Like you said, currently I use `world_to_viewport` to convert world-space locations to screen-space locations, and sync them every frame.
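For reference, that synchronization system is short but easy to get subtly wrong; a sketch, assuming a Bevy 0.13-era API where `Camera::world_to_viewport` returns `Option<Vec2>` (the `FollowWorldEntity` marker is hypothetical):

```rust
use bevy::prelude::*;

/// Hypothetical marker linking a UI node to the world entity it should follow.
#[derive(Component)]
struct FollowWorldEntity(Entity);

/// Each frame, project the target's world position into viewport space
/// and move the UI node there via absolute positioning.
fn sync_ui_to_world(
    cameras: Query<(&Camera, &GlobalTransform), With<Camera3d>>,
    targets: Query<&GlobalTransform, Without<Camera3d>>,
    mut nodes: Query<(&FollowWorldEntity, &mut Style)>,
) {
    let Ok((camera, camera_transform)) = cameras.get_single() else {
        return;
    };
    for (follow, mut style) in &mut nodes {
        let Ok(target) = targets.get(follow.0) else {
            continue;
        };
        // `world_to_viewport` returns `None` for points behind the camera.
        if let Some(pos) = camera.world_to_viewport(camera_transform, target.translation()) {
            style.position_type = PositionType::Absolute;
            style.left = Val::Px(pos.x);
            style.top = Val::Px(pos.y);
        }
    }
}
```

To avoid a one-frame lag, the system also has to be ordered after transform propagation, which is exactly the kind of boilerplate this issue asks the engine to absorb.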
I had a thought today based on @staff_eng's work, who is effectively reimplementing a large amount of UI in 2D space. It's probably been discussed before:

- UI nodes shouldn't be so special in terms of how they are rendered (perhaps an entire UI tree could be treated as something that can be rendered).
- UI tree roots should be parented by another non-UI entity, whether that is a window or an entity in 2D (perhaps even 3D) space.

Then you could have entire UI trees (e.g. interactive menus) positioned in world space. I think that can be hacked today, to some extent, by rendering to an image and then rendering that image in world space. If a UI tree is a child of a window, it might even help us solve the window-independent scaling thing.
In that world, the special-cased `Text2dBundle` could also be removed.
Big yes on this. It's a major issue for the UI to be special. Consider crates for graphical effects such as outlines for sprites or particles: if UI weren't special, you could attach those components to UI elements and it would just work. The lack of sprite-sheet support in UI has also been a major issue for users.
I think the only motivator for the split is that you can render UI on top of sprites, independently. But I think `RenderLayers` is better suited for that now that it exists.
I also added some of my musings to the Discord discussion (@logicprojects on Discord), but I think an important question is whether we want objects in the world to obstruct world-space UI. I think there's a mix of behavior in existing games, but I haven't done a study of it.
Currently, UI rendering happens after the main pass and post-processing, so it would always appear in front of all world objects (unless the depth buffer gets more use; I think it's currently an optional pass, and honestly that might be the correct solution).
Rendering to a texture would have the opposite effect where UI would always be occluded by other world objects (as the texture would just be a sprite in the world).
> whether we want objects in the world to obstruct world-space UI
You want both behaviors. `bevy_mod_billboard`, for example, exposes this choice as a depth-culling toggle and is useful prior art.
Depth culling enabled (obstructed):
Depth culling disabled (always on top):
Some (alpha) prior art: https://github.com/mvlabat/bevy_egui/pull/214
I forked `bevy_egui` to support rendering to a Bevy `Image`. Then that can be used as the basis for different materials; the image below is using a `StandardMaterial` on a cube.
I have not attempted to solve interactions yet. In particular, I'm looking for a way to raycast in world space (important for VR) and then resolve the UV coordinates from that raycast, so that I can pass the click event along to `egui`.
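Resolving the UV doesn't require any particular picking crate: given the hit point plus the hit triangle's vertex positions and UVs, it's a barycentric interpolation. A minimal sketch (the function name is hypothetical):

```rust
use bevy::math::{Vec2, Vec3};

/// Interpolate the UV at a hit point `p` lying on the triangle `tri`,
/// using barycentric coordinates.
fn uv_at_hit(p: Vec3, tri: [Vec3; 3], uvs: [Vec2; 3]) -> Vec2 {
    let (v0, v1, v2) = (tri[1] - tri[0], tri[2] - tri[0], p - tri[0]);
    let (d00, d01, d11) = (v0.dot(v0), v0.dot(v1), v1.dot(v1));
    let (d20, d21) = (v2.dot(v0), v2.dot(v1));
    let denom = d00 * d11 - d01 * d01;
    let v = (d11 * d20 - d01 * d21) / denom;
    let w = (d00 * d21 - d01 * d20) / denom;
    let u = 1.0 - v - w;
    uvs[0] * u + uvs[1] * v + uvs[2] * w
}
```

The resulting UV maps straight to a pixel on the egui render target, which can then be replayed as a synthetic pointer event.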
See https://github.com/bevyengine/bevy/discussions/15256 for a proposed partial solution. It does not address how to adjust layout relative to world coordinates, but opens a path to solving it more easily. EDIT: Nvm, I think I see a way to support billboarding.
**What problem does this solve or what need does it fill?**
Rendering UI elements in terms of world-space coordinates is extremely common and is critical for things like life bars, nameplates, tooltips, and interactable widgets.
However, it's not immediately obvious how to do so, and the existing workaround requires users to write their own boilerplate synchronization system.
**What solution would you like?**
- Add `ScreenSpaceTransform` and `GlobalScreenSpaceTransform` for UI entities by default (see discussion in #4213).
- When a UI entity is given a `GlobalTransform`, set its `ScreenSpaceTransform` to compensate, based on the UI camera's projection.

**What alternative(s) have you considered?**

Manually convert `Transform` coordinates using `WorldToScreen`.