Ralith opened this issue 3 years ago
Will keeping an acceleration data structure like a mapping from type ID to matching archetypes around and up-to-date really make sense when the above cost can be avoided using prepared queries?
Definitely questionable! The value of making query set-up faster isn't zero, but it's certainly a very low priority at this point, and a complex solution would be difficult to justify.
What I am mostly concerned about is that this might add overhead to component insertion/removal to speed up query set-up, a cost that even users of prepared queries, who gain nothing from it, would have to pay. (The observation that an ordered list outperforms a hash table within the archetypes also seems to suggest that typical usage is dominated by the constants rather than the asymptotics.)
Average-case component insertion/removal would bypass any acceleration structure updates, since target archetype indexes are cached in `ArchetypeSet::insert_edges` and `Archetype::remove_edges`.
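In sketch form, the edge cache looks something like the following; `ComponentId`, `ArchetypeIndex`, `target_for_insert`, and `find_or_create` are hypothetical stand-ins, with only the `insert_edges`/`remove_edges` field names echoing the real ones:

```rust
use std::collections::HashMap;

/// Stand-in for hecs' TypeId-based component key.
type ComponentId = u64;
/// Index of an archetype within the world's archetype list.
type ArchetypeIndex = u32;

/// Hypothetical archetype carrying cached transition edges, sketching the
/// idea behind `ArchetypeSet::insert_edges` and `Archetype::remove_edges`.
struct Archetype {
    /// Component -> archetype an entity moves to when that component is added.
    insert_edges: HashMap<ComponentId, ArchetypeIndex>,
    /// Component -> archetype an entity moves to when that component is removed.
    remove_edges: HashMap<ComponentId, ArchetypeIndex>,
}

impl Archetype {
    /// On the hot path, insertion consults the cached edge and never touches
    /// any query-acceleration structure; only a cache miss searches for (and
    /// possibly creates) the target archetype via `find_or_create`.
    fn target_for_insert(
        &mut self,
        component: ComponentId,
        find_or_create: impl FnOnce() -> ArchetypeIndex,
    ) -> ArchetypeIndex {
        *self.insert_edges.entry(component).or_insert_with(find_or_create)
    }
}
```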
One potential approach is to store a sparse-set of associated `Archetype`s per component. Then finding matching `Archetype`s for a query becomes the following steps:

1. Find the component in the query with the fewest associated `Archetype`s
2. For each of the `Archetype`s of that component
3. For each other component in the query, check its sparse-set to see if it contains that `Archetype`
This should be pretty quick.
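A minimal sketch of those steps, assuming a hypothetical `ComponentIndex` and using `HashSet` as a stand-in for a real sparse set:

```rust
use std::collections::{HashMap, HashSet};

type ComponentId = u64;
type ArchetypeIndex = u32;

/// Hypothetical per-component index: for every component, the set of
/// archetypes containing it.
struct ComponentIndex {
    archetypes_by_component: HashMap<ComponentId, HashSet<ArchetypeIndex>>,
}

impl ComponentIndex {
    /// Intersect the per-component sets, iterating the smallest one and
    /// probing the others, per steps 1-3 above.
    fn matching_archetypes(&self, query: &[ComponentId]) -> Vec<ArchetypeIndex> {
        // Step 1: the component with the fewest associated archetypes.
        let Some(smallest) = query
            .iter()
            .filter_map(|c| self.archetypes_by_component.get(c))
            .min_by_key(|set| set.len())
        else {
            return Vec::new();
        };
        // Steps 2-3: keep each of its archetypes only if every other
        // component's sparse-set also contains that archetype.
        smallest
            .iter()
            .copied()
            .filter(|archetype| {
                query.iter().all(|c| {
                    self.archetypes_by_component
                        .get(c)
                        .is_some_and(|set| set.contains(archetype))
                })
            })
            .collect()
    }
}
```

The work per query is then proportional to the size of the smallest per-component set times the number of queried components, independent of the total archetype count.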
Downsides:

- The per-component sparse-sets must be updated whenever a new `Archetype` is created.
- More memory usage.

That sounds like a plausible approach. Memory/archetype setup CPU cost should be insignificant. To justify the maintenance/complexity cost, I'd want to see a compelling and realistic benchmark, ideally something from a real game.
This problem is closely related to the archetype graph we use to optimize component insertion/removal: the set of archetypes traversed by a query that accesses a certain set of components is equal to the set of archetypes reachable by adding components to an entity that initially has exactly those components.
The existing graph can't be applied directly because it's lazily populated, but perhaps it could be extended to include virtual archetypes for every possible set of components? Naively this would lead to quadratic memory usage, so probably worse than a sparse set per component as proposed above.
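For illustration, if the graph were fully (or virtually) populated, finding a query's archetypes would be an ordinary graph search over insertion edges; everything below is a hypothetical sketch of that equivalence, not the existing lazily-built graph:

```rust
use std::collections::{HashMap, HashSet};

type ComponentId = u64;
type ArchetypeIndex = u32;

/// Hypothetical archetype node exposing only its insertion edges.
struct ArchetypeNode {
    insert_edges: HashMap<ComponentId, ArchetypeIndex>,
}

/// Collect every archetype reachable from `start` by repeatedly adding
/// components. By the equivalence above, if `start` holds exactly the
/// queried components, this is the set of archetypes the query must visit.
fn reachable_by_insertion(
    archetypes: &[ArchetypeNode],
    start: ArchetypeIndex,
) -> HashSet<ArchetypeIndex> {
    let mut seen = HashSet::from([start]);
    let mut stack = vec![start];
    while let Some(index) = stack.pop() {
        for &next in archetypes[index as usize].insert_edges.values() {
            if seen.insert(next) {
                stack.push(next);
            }
        }
    }
    seen
}
```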
Currently, each query checks compatibility with each known archetype. This works pretty well in practice because the number of archetypes doesn't tend to be astronomical and checking is fast, but there should be an asymptotically better solution.
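In sketch form, that status quo is a linear scan (hypothetical names, assuming a sorted component list per archetype):

```rust
type ComponentId = u64;

/// Hypothetical minimal model of an archetype for matching purposes.
struct Archetype {
    /// Sorted list of the component ids stored by this archetype.
    components: Vec<ComponentId>,
}

/// The current strategy: check every known archetype against the query.
/// Each check is cheap, so this is fast in practice, but the total cost
/// grows linearly with the number of archetypes.
fn matching_archetypes(archetypes: &[Archetype], query: &[ComponentId]) -> Vec<usize> {
    archetypes
        .iter()
        .enumerate()
        .filter(|(_, a)| query.iter().all(|c| a.components.binary_search(c).is_ok()))
        .map(|(index, _)| index)
        .collect()
}
```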