guerro323 opened this issue 5 years ago
A proposal for how the snapshot entities should be generated for clients (I haven't shown how it works locally yet)
// This is a small example of how the entity data for the delta of a snapshot should be generated for a client.
// I haven't shown how the locally generated snapshot version works yet (but we register everything there; there is no delta for local).
var clientSnapshotData = new DataBufferWriter(Allocator.Temp);
// Generate other data here...
// Write snapshot data from entities...
var entityCountMarker = default(DataBufferMarker);
var entityFound = default(int);
entityCountMarker = clientSnapshotData.CpyWrite(0);
foreach (var entity in Entities)
{
var entityHeaderWritten = default(bool);
var processorCountMarker = default(DataBufferMarker);
var processorFound = default(int);
var entityProcessors = GetProcessorsForEntity(entity);
foreach (var entityProcessor in entityProcessors)
{
// WriteEntityDeltaChange() is an extension method.
var deltaChange = entityProcessor.WriteEntityDeltaChange(entity, entity);
if (!deltaChange.HasChange)
continue; //< Instead of a continue, we should just write it to the local snapshot data (NOT the one sent to the client).
// It would be useless to write the entity header if no processor produced a change, right?
if (!entityHeaderWritten)
{
entityHeaderWritten = true;
clientSnapshotData.Write(ref entity);
processorCountMarker = clientSnapshotData.CpyWrite(0);
entityFound++;
clientSnapshotData.Write(ref entityFound, entityCountMarker); //< increment first, so the header count is not one short
}
// Overwrite the processor count placeholder (increment first, so the count stays in sync)
processorFound++;
clientSnapshotData.Write(ref processorFound, processorCountMarker);
clientSnapshotData.Write(ref entityProcessor.Id);
clientSnapshotData.AddBuffer(deltaChange.Data);
}
}
clientSnapshotData.Dispose();
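The core trick above is reserving a count slot with CpyWrite(0) and back-patching it through the marker once the real count is known. Here is a minimal language-agnostic sketch of that pattern in Python; BufferWriter, write_int, and write_int_at are hypothetical names, not the actual DataBufferWriter API.

```python
import struct

class BufferWriter:
    """Sketch of a marker-based byte buffer (hypothetical names)."""
    def __init__(self):
        self.data = bytearray()

    def write_int(self, value):
        """Append a 4-byte little-endian int; return its offset (the 'marker')."""
        marker = len(self.data)
        self.data += struct.pack("<i", value)
        return marker

    def write_int_at(self, value, marker):
        """Overwrite a previously written int in place (back-patching)."""
        self.data[marker:marker + 4] = struct.pack("<i", value)

# Usage mirroring the snippet above: reserve a count placeholder,
# write entries, and patch the final count back into the reserved slot.
writer = BufferWriter()
count_marker = writer.write_int(0)  # placeholder, like CpyWrite(0)
count = 0
for entry in [10, 20, 30]:
    count += 1
    writer.write_int_at(count, count_marker)  # keep the header in sync
    writer.write_int(entry)

header = struct.unpack_from("<i", writer.data, count_marker)[0]
```

Patching the count after every entry (rather than once at the end) matches the snippet above, where an entity header is only written lazily once the first changed processor is found.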
There are some changes in the YAML structure for the entities category:
Snapshot:
# No change to before...
EntityCount: 2 # How many entities need to be processed (DynInteger)
- Entity: [1, 5] # We loop over the entities the client has registered (including new ones).
- ProcessorCount: 5 # I doubt there will ever be a billion processors; a byte will be sufficient
- ProcessorId: 5
DataLength: 9
- ProcessorId: 4
DataLength: 12
#...
- Entity: [8, 1]
- ProcessorCount: 2
- ProcessorId: 4
DataLength: 12
- ProcessorId: 8
DataLength: 62
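As a sketch of how a client might walk that layout, here is the same structure read back from a flat stream; plain Python tuples stand in for the real binary reads, and none of this is the actual package API.

```python
# Flat stream matching the layout above:
# EntityCount, then per entity: entity id pair, ProcessorCount,
# then per processor: (ProcessorId, DataLength).
stream = iter([
    2,                  # EntityCount
    (1, 5), 2,          # Entity [1, 5], ProcessorCount (truncated to 2 here)
    (5, 9), (4, 12),    # (ProcessorId, DataLength) pairs
    (8, 1), 2,          # Entity [8, 1], ProcessorCount
    (4, 12), (8, 62),
])

snapshot = {}
entity_count = next(stream)
for _ in range(entity_count):
    entity = next(stream)
    processor_count = next(stream)
    # Each processor entry tells the reader how many bytes of
    # processor-specific data follow, so unknown processors can be skipped.
    snapshot[entity] = [next(stream) for _ in range(processor_count)]
```

Carrying DataLength per processor is what lets a reader skip data for processors it does not know about, instead of desynchronizing the whole stream.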
The main post will be updated once I find the best way to do it.
I'm currently integrating the same snapshot system in both of my games; I think I can finally integrate it into the network package.
yaml structure:
FullSnapshot:
GameTime:
- Frame: 32
Tick: 110025 # Based on Environment.TickCount
Time: 18.36 # Based on Unity Time.time
DeltaTick: 1
DeltaTime: 0.001
Entities:
- Length: 8
- SnapshotEntityInformation:
- Entity: [1, 5]
ModelId: 2
#...
Processors:
- Length: 3
Processor:
- Id: 6
DataLength: 96
Processor:
- Id: 7
DataLength: 104
#...
The scripts for making snapshots are now integrated into the main package. I'll close this issue once I've updated the documentation.
done
Because of how Unity is progressing with the new DOTS packages, I can do some cleanup in the snapshot system. The SnapshotEntityDataManualStreamer system
will be deleted.
The entity streamer will no longer manage component existence; a patcher will do that instead. This is done to drastically reduce snapshot size (e.g. if you have a lot of game components) and to improve streamer performance.
The incoming snapshot system, called 'Revolution', will change how the stream format works. It's a hybrid between a component-based and an archetype-based snapshot system. The archetype of an entity can be dynamic, but most users won't even know it's archetype-based.
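One way to picture a dynamic archetype, as a sketch rather than the actual Revolution implementation: an archetype id is just a stable handle for one particular combination of serializer (system) ids, registered the first time that combination is seen. All names below are illustrative.

```python
# Registry mapping a combination of system ids to a stable archetype id.
archetypes = {}

def get_or_register_archetype(system_ids):
    """Return a stable archetype id for this set of systems, registering
    it on first sight -- this is what makes archetypes 'dynamic': an
    entity gaining or losing a serialized component simply maps to a
    different (possibly new) archetype id."""
    key = tuple(sorted(system_ids))  # assume order does not matter
    if key not in archetypes:
        archetypes[key] = len(archetypes) + 1
    return archetypes[key]

a = get_or_register_archetype([6, 7])     # first combination seen
b = get_or_register_archetype([7, 6])     # same set -> same archetype
c = get_or_register_archetype([6, 7, 9])  # new combination -> new id
```

The server then only has to transmit each new archetype (id plus its system list) once, and per-entity data shrinks to a single archetype id; that is the size win over tagging every component on every entity.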
For now, there are three ways to write to and read from a snapshot:
The snapshot format: https://github.com/StormiumTeam/package.stormiumteam.networking/blob/revolution_prototype/Documentation/References/snapshot_format.md
uint entity_update_count; // How many entities had their archetype changed?
packed_uint("tick");
// -- 'entity_count' is set to 0 if no ghosts were added or removed
packed_uint("entity_count");
if (entity_count > 0) {
write_missing_archetype();
// -- temporarily used until Unity fixes the bug after a writer.flush()
byte("separator"); //< 42
uint previousGhostId;
uint previousArchetypeId;
// The client will know if a ghost was added or removed.
foreach (ghost in ghostArray) {
packed_uint_delta("ghost_id", previousGhostId);
// -- for now, we write the archetype of all ghosts
// -- in future it will be optimized to only write the changed ghosts.
packed_uint_delta("ghost_arch", previousArchetypeId);
previousGhostId = ghost.id;
previousArchetypeId = ghost.arch;
}
} else if (entity_update_count > 0) {
packed_uint("entity_update_count");
// -- be sure to only read incoming archetypes if 'entity_update_count' is greater than 0!
write_missing_archetype();
// -- temporarily used until Unity fixes the bug after a writer.flush()
byte("separator"); //< 42
uint previousGhostIndex;
uint previousArchetypeId;
// We use the index instead of the id, so delta compression will do a better job here.
foreach (change in entity_update) {
packed_uint_delta("ghost_index", previousGhostIndex);
packed_uint_delta("ghost_arch", previousArchetypeId);
previousGhostIndex = change.ghostIndex;
previousArchetypeId = change.arch;
}
}
// -- Write the rest of the data from systems
system_snapshot_data();
write_missing_archetype() {
packed_uint("new_archetype_count");
uint previousArchetypeId;
foreach (arch in new_archetypes)
{
packed_uint_delta("arch_id", previousArchetypeId);
packed_uint("arch_system_count");
foreach (system in arch.systems)
{
packed_uint("system_id");
}
previousArchetypeId = arch.id;
}
}
The update of Revolution on GameHost brings some new changes to the snapshot format. One of the biggest changes is how the client knows whether this data is a remake (aka a full re-creation) or not. There is also far more information when serializing a ghost (local and remote information).
Another very important note: we send the entity as an identifier instead of a generated ghost id.
var tick = uint();
var isRemake = bool();
if (isRemake) {
archetypes_data();
entities();
// -- Removed entities
uint prevLocalId;
uint prevLocalVersion;
var removedCount = uintD4();
while (removedCount-->0) {
var localId = uintD4Delta(prevLocalId);
var localVersion = uintD4Delta(prevLocalVersion);
prevLocalId = localId;
prevLocalVersion = localVersion;
}
} else {
archetypes_data();
entities();
}
// -- Write the rest of the data from systems
while (!isFinishedReading) {
var systemId = uintD4();
var length = uintD4();
// ...
var systemData = new byte[length];
readArray(systemData, length);
}
archetypes_data() {
var newArchetypeCount = uintD4();
uint previousArchetypeId;
foreach (arch in new_archetypes)
{
uintD4Delta(arch.Id, previousArchetypeId);
uintD4(arch.systems.count);
uint previousSystemId;
foreach (system in arch.systems)
{
uintD4Delta(system.Id, previousSystemId);
previousSystemId = system.Id;
}
previousArchetypeId = arch.Id;
}
}
entities() {
// Delta variables
uint prevLocalId;
uint prevLocalVersion;
uint prevRemoteId;
uint prevRemoteVersion;
uint prevArchetype;
int prevInstigator;
var updateCount = uintD4();
while (updateCount-->0) {
var localId = uintD4Delta(prevLocalId);
var localVersion = uintD4Delta(prevLocalVersion);
var remoteId = uintD4Delta(prevRemoteId);
var remoteVersion = uintD4Delta(prevRemoteVersion);
var archetype = uintD4Delta(prevArchetype);
var instigator = uintD4Delta(prevInstigator);
prevLocalId = localId;
prevLocalVersion = localVersion;
// ^ do the same for prevRemoteId, prevRemoteVersion... prevInstigator.
}
}
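The delta decoding in entities() can be sketched like this, with plain integers standing in for uintD4Delta reads and only three of the six fields kept for brevity; the names are illustrative, not the GameHost API.

```python
def decode_entities(deltas):
    """Rebuild absolute (localId, localVersion, archetype) tuples from a
    stream of per-field deltas, mirroring the prev* bookkeeping above."""
    prev_id = prev_version = prev_arch = 0
    out = []
    for d_id, d_version, d_arch in deltas:
        # Each field is decoded relative to the previous entity's field,
        # so runs of similar entities decode from tiny deltas.
        prev_id += d_id
        prev_version += d_version
        prev_arch += d_arch
        out.append((prev_id, prev_version, prev_arch))
    return out

# Two entities: (5, 1, 2) then (6, 1, 2) -- the second one costs only
# the small deltas (1, 0, 0) on the wire.
updates = decode_entities([(5, 1, 2), (1, 0, 0)])
```

Delta-compressing each field against the previous entity is why the earlier comment prefers sorted indices over raw ids: the closer consecutive values are, the smaller every encoded delta becomes.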
(this issue needs to be edited in the future to use the new roadmap issue format)
Needed in the snapshot system:
-> Compression (there is already packet compression)
-> Compatibility with ECS
-> Possibility to replay snapshots (in real time or for making demos (like in Quake))
First proposal: a snapshot should be streamed like this:
A system that wants to manage a snapshot just needs to implement one or both interfaces:
Example system (implementing both interfaces is a bad idea, by the way):