Open phisn opened 2 months ago
Hi! I used a Vec since I heavily iterate over the clients inside the replication system, so a HashMap would affect the replication performance. But I also see your problem. I need to think about it. Maybe you could iterate over the clients first and cache the necessary data on your side?
bevy_replicon_attributes might be a good solution to your use-case. You'd probably want to do perf tests comparing it with a raw approach. And also perf test both with few and many clients.
How to implement:

1. Give each entity a Sector component that contains an id for the section of the map it currently belongs to. You could also use Sectors if your entity can occupy multiple sections.
2. When an entity's Sector changes, update its visibility condition: vis![Sector(current_sector)]. This will trigger the backend to update its visibility for all clients (those who lost and those who gained visibility).
3. When a player's Sector changes, update their visibility attributes based on which sectors they can see that changed from the previous tick (ClientAttributes::add and ClientAttributes::remove).

Since each player and entity will change sectors at most a few times per second, you avoid the worst case of updating the visibility of everything every tick. You also decouple entities from clients, allowing more complex visibility calculations for players (e.g. what if a big rock occludes entities?).
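To make step 1 concrete, here is a minimal sketch of the sector-assignment side in plain Bevy. The SECTOR_SIZE value and the Sector layout are assumptions on my part, and the visibility/attribute updates from steps 2 and 3 are only hinted at in comments, since those go through bevy_replicon_attributes:

```rust
use bevy::prelude::*;

/// Assumed sector size in world units; pick whatever granularity fits your map.
const SECTOR_SIZE: f32 = 64.0;

/// Id of the map section an entity currently occupies (step 1).
#[derive(Component, Clone, Copy, PartialEq, Eq)]
struct Sector(IVec2);

/// Maps a world position to a sector id on a uniform grid.
fn sector_of(translation: Vec3) -> IVec2 {
    IVec2::new(
        (translation.x / SECTOR_SIZE).floor() as i32,
        (translation.z / SECTOR_SIZE).floor() as i32,
    )
}

/// Recomputes `Sector` for anything that moved. Only when the sector actually
/// changes would you touch the visibility condition (step 2) or, for players,
/// the client's attributes (step 3).
fn update_sectors(mut moved: Query<(&Transform, &mut Sector), Changed<Transform>>) {
    for (transform, mut sector) in &mut moved {
        let new_sector = sector_of(transform.translation);
        if sector.0 != new_sector {
            sector.0 = new_sector;
            // e.g. re-issue vis![Sector(new_sector)] for entities, or
            // ClientAttributes::add/remove for the sectors a player can now see.
        }
    }
}
```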
> bevy_replicon_attributes might be a good solution to your use-case. You'd probably want to do perf tests comparing it with a raw approach. And also perf test both with few and many clients.
What about players moving frequently across sector borders? In my intended approach I would make entities inside a 3x3 range visible and entities outside a 5x5 range invisible. This is possible with your approach when the player keeps ownership over all sectors in a 3x3 range (keep in mind that all these 9 sectors need to watch a 3x3 range of their own). Since I consider this a hot path, it would be good to have more control over the implementation.
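To make the hysteresis above concrete, here is a small sketch in plain Rust; the (i32, i32) sector coordinates are just a stand-in. An entity becomes visible once it enters the 3x3 neighbourhood and only becomes invisible again once it leaves the 5x5 neighbourhood:

```rust
/// Chebyshev distance between two sector coordinates, i.e. how many grid
/// "rings" apart they are.
fn sector_distance(a: (i32, i32), b: (i32, i32)) -> i32 {
    (a.0 - b.0).abs().max((a.1 - b.1).abs())
}

/// 3x3 show / 5x5 hide hysteresis: distance <= 1 shows the entity,
/// distance > 2 hides it, and anything in between keeps the current state.
fn desired_visibility(player: (i32, i32), entity: (i32, i32), currently_visible: bool) -> bool {
    match sector_distance(player, entity) {
        d if d <= 1 => true,
        d if d > 2 => false,
        _ => currently_visible,
    }
}
```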
> Hi! I used a Vec since I heavily iterate over the clients inside the replication system, so a HashMap would affect the replication performance. But I also see your problem. I need to think about it. Maybe you could iterate over the clients first and cache the necessary data on your side?
I think O(1) access is very useful since getting entities by id is also used inside replicon itself. I would suggest one of the following approaches.

We can store a HashMap alongside the Vec in ReplicatedClients, mapping client id -> index in the Vec. We assume that connecting and disconnecting is relatively rare. Adding is trivial, and removal already has enough overhead that an extra update to the HashMap is negligible.

I think the least intrusive and easiest to implement is probably 4: we keep the data structure intact and change only the add, remove & get methods. What do you think?
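As a rough standalone sketch of the HashMap-alongside-the-Vec option (ClientId and ReplicatedClient here are stand-ins, not bevy_replicon's real types): iteration stays on the dense Vec, while the HashMap only tracks id -> index and gets patched up on swap_remove.

```rust
use std::collections::HashMap;

// Stand-in types; bevy_replicon's real ClientId/ReplicatedClient look different.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct ClientId(u64);

struct ReplicatedClient {
    id: ClientId,
    // visibility, change ticks, buffers, ...
}

#[derive(Default)]
struct ReplicatedClients {
    clients: Vec<ReplicatedClient>,  // dense storage, cheap to iterate every tick
    index: HashMap<ClientId, usize>, // client id -> position in `clients`
}

impl ReplicatedClients {
    fn add(&mut self, client: ReplicatedClient) {
        self.index.insert(client.id, self.clients.len());
        self.clients.push(client);
    }

    fn remove(&mut self, id: ClientId) {
        if let Some(i) = self.index.remove(&id) {
            self.clients.swap_remove(i);
            // The element swapped into slot `i` (if any) needs its index updated.
            if let Some(moved) = self.clients.get(i) {
                self.index.insert(moved.id, i);
            }
        }
    }

    fn get_mut(&mut self, id: ClientId) -> Option<&mut ReplicatedClient> {
        let i = *self.index.get(&id)?;
        self.clients.get_mut(i)
    }
}
```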
> What about players moving frequently across sector borders?
It's hard to know unless you test it. The various options are easy to implement, so I recommend testing to find what works best. You should also test with a fork of bevy_replicon that provides O(1) lookup for clients to see if it makes a difference.
> We can store a HashMap alongside the Vec in ReplicatedClients, mapping client id -> index in the Vec. We assume that connecting and disconnecting is relatively rare. Adding is trivial, and removal already has enough overhead that an extra update to the HashMap is negligible.
I would go with this one. It's as simple as 4, but more performant for lookups and doesn't require a separate crate.
@phisn could you submit a PR?
I'll implement it within the next month.
Is there any reason why a vector instead of a hash map is used here?
https://github.com/projectharmonia/bevy_replicon/blob/c740527a268a3a902db5852f2cefa879d5cb4cc5/src/core/replicated_clients.rs#L21-L25
I have the following use-case: Entities & Players have a transform and move around. When entities get near players, they should become visible. Given that I have a lot of entities moving around, it seems more realistic that the entity notifies the player when it gets near enough. Therefore I would like to look up ClientVisibility efficiently.
The required function is change_visibility(entity, client_id, state). We can implement it in one of the following ways:

1. replicated_clients.client(client_id).visibility_mut().set_visibility(entity, state).
2. replicated_client.visibility_mut().set_visibility(entity, state) when iterating over all clients using replicated_clients.iter_mut(). Since the number of roaming entities is large, it is not viable to do this without caching proximity changes while iterating the entities. We could use a HashMap<ClientId, Vec<(Entity, bool)>>, which seems unnecessarily difficult considering the underlying structure of ReplicatedClients.
I suggest making Clients a HashMap to enable a broader range of use-cases. I am open to fixing this myself if there is no real reason against this suggested approach.