projectharmonia / bevy_replicon

Server-authoritative networking crate for the Bevy game engine.
https://crates.io/crates/bevy_replicon
Apache License 2.0

Value-diff compression #178

Open UkoeHB opened 8 months ago

UkoeHB commented 8 months ago

Some networked games compress updates by networking only the value difference at the component level. Replicon doesn't currently support that.

Here is a first-draft idea for how to support it:

If we pass the change-limit tick + current tick into serializers, then the serializer can keep local state that tracks historical values and performs a diff internally. The deserializer would also need to track historical state and apply diffs to the correct older value. We might be able to provide a pre-baked value-diff compressor for types that implement certain traits.

Note that supporting this would require changes to the shared copy buffer, since different clients could get different component serializations.
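
Roughly, the serializer-side state could look something like this (a sketch only; none of these names are existing replicon API, and the client/tick types are placeholders):

use std::collections::HashMap;

/// Hypothetical state a diffing serializer could keep per component type.
/// Keyed by client; stores the value that client is assumed to have.
struct DiffState<C> {
    last_sent: HashMap<u64 /* client id */, (u32 /* tick */, C)>,
}

impl<C: Clone> DiffState<C> {
    /// Serializes a diff against the client's known value, or the full value
    /// for clients we haven't sent this component to yet.
    fn serialize_for_client(
        &mut self,
        client: u64,
        current_tick: u32,
        component: &C,
        diff: impl Fn(&C, &C) -> Vec<u8>,
        full: impl Fn(&C) -> Vec<u8>,
    ) -> Vec<u8> {
        let message = match self.last_sent.get(&client) {
            // A real implementation would only treat a value as known once the
            // client acknowledges the tick it was sent at.
            Some((_tick, old)) => diff(old, component),
            None => full(component),
        };
        self.last_sent.insert(client, (current_tick, component.clone()));
        message
    }
}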

Shatur commented 3 weeks ago

Current state

We now pass ticks to serializers and send different data to different clients. We just need to add a way to get an old value.

Component-based approach

We can create a component ComponentHistory. It's not generic itself, but provides get_mut::<D>()/insert(D). D may or may not be equal to the component we serialize (to let the user choose how to store the value). Internally it stores a hash map keyed by ClientId whose values are type-erased blob data. We add ClientId and &mut ComponentHistory fields to SerializeCtx to provide access.

The overhead of this solution is an additional lookup for ComponentHistory per entity, plus iteration over all components on disconnect for cleanup. Getting the history will require an additional lookup for D.

I also considered making it generic to avoid dealing with type magic, but that makes it harder to clean up when a client disconnects, requires querying for ComponentHistory for each component instead of once per entity, and I can't use it with SerializeCtx this way.
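
For illustration, a rough sketch of the non-generic storage, using TypeId + Box<dyn Any> in place of a real blob implementation (all names are placeholders; here the client is passed explicitly, while in practice it would come from SerializeCtx):

use std::any::{Any, TypeId};
use std::collections::HashMap;

use bevy::prelude::*;

/// Stand-in for replicon's client identifier.
type ClientId = u64;

/// Hypothetical per-entity history component. Not generic itself; values are
/// type-erased so one component can hold history for every replicated component.
#[derive(Component, Default)]
struct ComponentHistory {
    values: HashMap<(ClientId, TypeId), Box<dyn Any + Send + Sync>>,
}

impl ComponentHistory {
    fn get_mut<D: 'static>(&mut self, client: ClientId) -> Option<&mut D> {
        self.values
            .get_mut(&(client, TypeId::of::<D>()))?
            .downcast_mut()
    }

    fn insert<D: Send + Sync + 'static>(&mut self, client: ClientId, value: D) {
        self.values.insert((client, TypeId::of::<D>()), Box::new(value));
    }
}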

Resource-based approach

Exactly the same as the component-based approach, but with an additional internal hash map keyed by Entity, no Option (the resource is always present), and an Entity field inside SerializeCtx (for lookups).

The iteration overhead will be near-zero (we look up the resource once instead of per entity), but if a component serialization function needs the history, it's a bit slower due to the entity-based lookup. Cleanup on disconnect is also simpler.
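
The resource variant is the same storage with an Entity in the key, which also makes cleanup trivial (again, a sketch with placeholder names):

use std::any::{Any, TypeId};
use std::collections::HashMap;

use bevy::prelude::*;

/// Stand-in for replicon's client identifier.
type ClientId = u64;

/// Hypothetical resource holding history for all replicated entities.
#[derive(Resource, Default)]
struct ReplicationHistory {
    values: HashMap<(ClientId, Entity, TypeId), Box<dyn Any + Send + Sync>>,
}

impl ReplicationHistory {
    /// On disconnect we retain-filter one map instead of iterating every entity.
    fn remove_client(&mut self, client: ClientId) {
        self.values.retain(|(id, ..), _| *id != client);
    }

    /// Despawned entities can be cleaned up the same way.
    fn remove_entity(&mut self, entity: Entity) {
        self.values.retain(|&(_, e, _), _| e != entity);
    }
}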

Questions

I would appreciate opinions.

UkoeHB commented 3 weeks ago

Resource-based sounds better. Leads to better memory management as entities are spawned and despawned.

Bevy exposes Resources, but doesn't provide a way to insert a new resource.

Can't we just map to ComponentId?

Shatur commented 3 weeks ago

Can't we just map to ComponentId?

Yeah, that's what I'm suggesting: map ComponentId into our own implementation of BlobVec (I don't think we can just map it to OwningPtr).

Shatur commented 3 weeks ago

Additional notes

Instead of providing the resource from the context, we provide a resource adapter that is scoped to the current entity. This is just to avoid passing the entity each time; it will be nicer to use.

Forgot to talk about ticks. We will need to store values with ticks and return only the last acked one. So instead of get_mut::<D>()/insert(D), we probably want last_acked::<D>()/add_unacked(D, tick). Internally we will clean up acked values.
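
To illustrate the tick handling, the per-value bookkeeping behind last_acked/add_unacked could look roughly like this (a sketch; the confirm method, the internal layout, and the tick type are assumptions):

use std::collections::VecDeque;

/// Stand-in for replicon's tick type.
type Tick = u32;

/// Hypothetical history for one component value: the last value the client
/// acknowledged plus a queue of values sent but not yet acknowledged.
struct TickedHistory<D> {
    acked: Option<D>,
    unacked: VecDeque<(Tick, D)>,
}

impl<D> TickedHistory<D> {
    /// Returns the last value the client acknowledged, if any.
    fn last_acked(&self) -> Option<&D> {
        self.acked.as_ref()
    }

    /// Records a value sent at `tick`.
    fn add_unacked(&mut self, value: D, tick: Tick) {
        self.unacked.push_back((tick, value));
    }

    /// Called when the client confirms `tick`: older unacked values are dropped
    /// and the newest confirmed one becomes the acked value.
    fn confirm(&mut self, tick: Tick) {
        while let Some((t, _)) = self.unacked.front() {
            if *t > tick {
                break;
            }
            let (_, value) = self.unacked.pop_front().unwrap();
            self.acked = Some(value);
        }
    }
}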

API showcase

Considering all the above, here is how the API might look:

#[derive(Component, Serialize, Deserialize, Clone, Copy)]
struct Example {
    a: u32,
    b: u32,
    c: u32,
}

// Let's write the diff logic as methods for clarity.
impl Example {
    fn delta(&self, _other: &Self) -> ExampleDelta {
        todo!();
    }

    fn apply(&mut self, _delta: ExampleDelta) {
        todo!();
    }
}

/// Depends on how the user wants to compress, but let's consider the simplest example.
#[derive(Serialize, Deserialize, Clone, Copy)]
struct ExampleDelta {
    a: Option<u32>,
    b: Option<u32>,
    c: Option<u32>,
}

/// The user overrides the serialization function, which checks the last received value using the context.
fn serialize(
    ctx: &SerializeCtx,
    component: &Example,
    cursor: &mut Cursor<Vec<u8>>,
) -> bincode::Result<()> {
    if let Some(acked_component) = ctx.component_values.last_acked::<Example>() {
        let delta = component.delta(acked_component);
        DefaultOptions::new().serialize_into(cursor, &delta)?;
    } else {
        // We can fall back to the default serialization function when there is no acked value to diff against.
        default_serialize(ctx, component, cursor)?;
    }

    ctx.component_values.add_unacked(*component, ctx.server_tick);

    Ok(())
}

/// Deserialization in place is called only for already received values, so we override it to apply the delta.
///
/// The regular deserialization function in this example remains the default one.
fn deserialize_in_place(
    _deserialize: DeserializeFn<Example>,
    _ctx: &mut WriteCtx,
    component: &mut Example,
    cursor: &mut Cursor<&[u8]>,
) -> bincode::Result<()> {
    let delta: ExampleDelta = DefaultOptions::new().deserialize_from(cursor)?;
    component.apply(delta);
    Ok(())
}

Built-in trait

We can provide a trait to automate registration. But the user will have to write it manually when mapping is needed (unless we also provide a helper for diff + mapping) and for third-party types. So I don't think we need to provide it.
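
For reference, the trait under discussion could look roughly like this (a sketch, not something the crate would necessarily ship):

/// Hypothetical trait that would let us register the diff serialization
/// functions automatically for a component.
trait Diffable: Sized {
    /// Compact representation of the change between two values.
    type Delta;

    /// Computes the delta that turns `old` into `self`.
    fn delta(&self, old: &Self) -> Self::Delta;

    /// Applies a previously computed delta in place.
    fn apply(&mut self, delta: Self::Delta);
}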