Open aevyrie opened 4 days ago
Off the cuff, my initial thought is that performance might be one of the major constraints here. The naive solution is to hit test everything, and let the plugin sort/filter the raw stream, and allow users to query that raw stream. This was one of the original goals of the current design! However, I found that in some cases, this is either too slow (raycasting) or impossible (shader picking). In practice, the raycasting picking backends use this knowledge to early exit and avoid reporting hits when they aren't needed for picking.
There might be a solution that builds on top of pointer inputs, like picking backends, but allows passing in queries to run when the "general raycast backend" shoots rays from the pointer. This could be a standard interface that applicable picking backends could opt into. For example, the raycasting backends might do an early exit picking raycast for perf reasons, in addition to other raycasts as described by requests to the "general pointer raycasting plugin".
- `PointerQuery` extension for backends
- `PointerQuery` resource that you can add custom queries to, that is run for all pointers each frame

This would avoid many performance issues, because you only need a single ray traversal, while executing all queries for only as long as those queries are interested in reading results.
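As a sketch of how such a batched interface might behave (everything below is hypothetical, not an existing `bevy_picking` API): queries are registered up front, and a backend feeds hits to all of them during a single front-to-back ray traversal, early-exiting once every query has what it needs.

```rust
// Hypothetical sketch of a `PointerQuery`-style registry (assumed names,
// not real bevy_picking API): users register queries, and a backend feeds
// hits to all of them during one front-to-back ray traversal.

/// A hit reported by a backend while traversing a ray front-to-back.
#[derive(Clone, Copy, Debug)]
struct Hit {
    entity: u64, // stand-in for bevy's `Entity`
    depth: f32,
}

/// One registered query: a filter plus a cap on how many hits it wants.
struct PointerQueryEntry {
    filter: Box<dyn Fn(&Hit) -> bool>,
    results: Vec<Hit>,
    max_hits: usize,
}

/// Stand-in for a per-pointer resource holding all registered queries.
#[derive(Default)]
struct PointerQueries {
    entries: Vec<PointerQueryEntry>,
}

impl PointerQueries {
    fn add(&mut self, max_hits: usize, filter: impl Fn(&Hit) -> bool + 'static) {
        self.entries.push(PointerQueryEntry {
            filter: Box::new(filter),
            results: Vec::new(),
            max_hits,
        });
    }

    /// Called by the backend for each hit, front-to-back. Returns `false`
    /// once every query has all the hits it asked for, letting the
    /// traversal early-exit the way the raycasting backends do today.
    fn observe(&mut self, hit: Hit) -> bool {
        let mut any_unsatisfied = false;
        for entry in &mut self.entries {
            if entry.results.len() < entry.max_hits && (entry.filter)(&hit) {
                entry.results.push(hit);
            }
            if entry.results.len() < entry.max_hits {
                any_unsatisfied = true;
            }
        }
        any_unsatisfied
    }
}

/// Runs one traversal over three hits and returns how many hits each
/// registered query collected.
fn run_demo() -> (usize, usize) {
    let mut queries = PointerQueries::default();
    queries.add(1, |_| true); // "topmost hit", like picking today
    queries.add(usize::MAX, |h| h.depth < 2.0); // "all hits within 2 units"

    // Hits arrive sorted front-to-back, as backends report them.
    for hit in [
        Hit { entity: 1, depth: 0.5 },
        Hit { entity: 2, depth: 1.5 },
        Hit { entity: 3, depth: 3.0 },
    ] {
        if !queries.observe(hit) {
            break; // every query satisfied: stop the traversal early
        }
    }
    (queries.entries[0].results.len(), queries.entries[1].results.len())
}

fn main() {
    assert_eq!(run_demo(), (1, 2));
    println!("ok");
}
```

The key property is that all registered queries share one traversal, and the traversal only runs as deep as the hungriest query requires.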
This solution would also allow any 3rd party to add support. For example, this would be a simple addition on top of the existing mesh picking backends, and should be just as easy to add for the downstream `rapier` and `avian` physics raycasters.
Having a "retained" interface like this makes a lot of sense to me, because the raycast itself happens at a fixed point in the schedule, in order to batch all queries for the same pointer input that frame. This is as opposed to being able to immediately request and evaluate these queries at any time in the schedule.
Want to use the `RayMap` to do pointer raycasts with your engine of choice?

https://github.com/bevyengine/bevy/issues/15287 could be used to help improve the ergonomics of picking observers, by making it very easy to filter for entities that they trigger on.
Following the discussion started here: https://github.com/bevyengine/bevy/issues/16065#issuecomment-2438900604
@cart @NthTensor I've rephrased the discussion in my own words, as an attempt to better understand it. Please correct me if I've misrepresented something!
## Context and Current State
`bevy_picking` has an opinion of what "picking" means. Specifically, it means coalescing all hit tests across all entities into a view of what is directly below each pointer, plus the ability for entities to allow hits to pass through to lower levels when determining which entities are hovered. Generally speaking, only the topmost entity under a pointer is considered "hovered", regardless of how many entity hits may have been reported by all backends for this pointer, unless that entity allows picks to pass through to lower entities.

This definition broadly follows the way UIs reason about picking, but applies it to all entities (UI, 3D, 2D, etc.). While this covers most use cases, especially for 2D/UI, it lacks the expressiveness of something like a general purpose raycast.
The primary limitations lie in the logic used to decide which entities are hovered, and which events are triggered. Currently, this is done solely with the optional `PickingBehavior` component, which provides two axes of control:

- `should_block_lower`: does this entity block things below it from being hovered?
- `is_hoverable`: is this entity itself hoverable; i.e. will it emit and trigger events when hovered?

This behavior is only decidable by the entities themselves, not, say, based on the state of the application.
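A minimal sketch of how these two axes interact. The struct below is a plain stand-in for the real component, and the traversal is a simplified version of the top-to-bottom hover filtering this issue discusses; treat it as an illustration under those assumptions, not the actual implementation.

```rust
// Stand-in for the real `PickingBehavior` component; field names are
// taken from the discussion, the struct shape is assumed.
#[derive(Clone, Copy)]
struct PickingBehavior {
    /// Does this entity block things below it from being hovered?
    should_block_lower: bool,
    /// Will this entity emit and trigger events when hovered?
    is_hoverable: bool,
}

/// Walk hits top-to-bottom, collecting hoverable entities and halting
/// at the first entity that blocks lower entities.
fn hovered(hits_top_to_bottom: &[(u64, PickingBehavior)]) -> Vec<u64> {
    let mut result = Vec::new();
    for &(entity, behavior) in hits_top_to_bottom {
        if behavior.is_hoverable {
            result.push(entity);
        }
        if behavior.should_block_lower {
            break;
        }
    }
    result
}

fn main() {
    let pass_through = PickingBehavior { should_block_lower: false, is_hoverable: true };
    let blocking = PickingBehavior { should_block_lower: true, is_hoverable: true };
    // Entity 1 lets the pick pass through, so entity 2 is also hovered;
    // entity 2 blocks, so entity 3 is never reached.
    let hits = [(1, pass_through), (2, blocking), (3, blocking)];
    assert_eq!(hovered(&hits), vec![1, 2]);
    println!("{:?}", hovered(&hits));
}
```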
## `mod_picking` Architecture

A brief review of the architecture of the picking plugin, for reference during discussion:

1. Backends report hits under each pointer. Backends may early-exit based on the `PickingBehavior` component for performance reasons. (Existing raycasting backends do this.)
2. Hits are sorted by depth and `Camera::order`. This ensures that hits are correctly ordered to match the order that entities are rendered on screen. This data is used to build:
   - `OverMap`: maps pointers to all entities that they hit (as reported by backends), sorted in order.
   - `HoverMap`: filters the `OverMap` by traversing the hit entities top-to-bottom, and following the blocking/hovering logic defined in each entity's `PickingBehavior` component, halting as soon as a blocking entity is hit.
3. Each frame, the `HoverMap` is copied to the `PreviousHoverMap`. The event system then looks at these two maps as the authoritative picking state to determine what events to send. If an entity was hovered in the previous frame, but is absent this frame, we know to send a `Pointer<Out>` event.

The general consensus is that this model and definition of picking is sufficient, and should be retained. However, it would be nice if we could reuse some of these abstractions, while extending their functionality to enable more expressive queries for what entities are under each pointer, and what events to trigger.
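The `Pointer<Out>` derivation can be sketched as a set difference between the two maps. This is a simplified stand-in, assuming plain per-pointer entity sets rather than the real map types:

```rust
use std::collections::HashSet;

// Minimal sketch of the event derivation: diff this frame's hover set
// against the previous frame's to decide which `Pointer<Over>` /
// `Pointer<Out>` events to send. `Entity` is a stand-in type, and the
// real `HoverMap`/`PreviousHoverMap` are keyed per pointer.

type Entity = u64;

/// Returns (entities needing `Over` events, entities needing `Out` events).
fn diff_hover(
    previous: &HashSet<Entity>,
    current: &HashSet<Entity>,
) -> (Vec<Entity>, Vec<Entity>) {
    let over: Vec<Entity> = current.difference(previous).copied().collect();
    let out: Vec<Entity> = previous.difference(current).copied().collect();
    (over, out)
}

fn main() {
    // Last frame the pointer hovered entities 1 and 2; this frame 2 and 3.
    let previous: HashSet<Entity> = [1, 2].into_iter().collect();
    let current: HashSet<Entity> = [2, 3].into_iter().collect();
    let (over, out) = diff_hover(&previous, &current);
    // Entity 3 is newly hovered; entity 1 is no longer hovered.
    assert_eq!(over, vec![3]);
    assert_eq!(out, vec![1]);
    println!("over: {over:?}, out: {out:?}");
}
```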
## Performance

One of the constraints with making these queries more expressive is performance. The current raycasting backend, for example, examines the `PickingBehavior` component of entities as it hits them front-to-back, as a significant performance optimization. If we were to naively intersect all entities, and query them later, this would make raycasts exponentially more expensive.

## Improvement Discussion
The point of discussion is then: how can we modify these existing tools to allow for more complex queries and interaction events, in addition to the global coalesced state defined in the `HoverMap`?

Quoting @cart: