My understanding is that in the `performance` branch of Automerge they mostly implement the new binary encoding, hence maybe a better name for that branch would be `encoding`.
With the improvements in that branch, they solve the network encoding problem.
But that is only one of the three problems Automerge has. The main problem is not the network overhead: you could simply gzip the old JSON format and tweak the JSON encoding a bit, and the results would be acceptable for the network.
The other two problems (I believe the main ones) are high RAM usage and slow performance. As I understand it, the root of both is that Automerge stores each inserted character as a separate object.
This results in huge RAM usage and in slow updates while building those character-by-character lists. Together, these mean that Automerge is not usable on the server, and it is very heavy on the client side as well.
I think the authors are aware of that and are working on it.
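For illustration, here is a minimal sketch of the kind of character-by-character insert workload that exposes this behaviour. It assumes the classic Automerge `Text` API (`Automerge.from`, `Automerge.change`, `insertAt`); the workload itself is made up for this example and is not taken from the benchmark suite:

```js
// Minimal illustrative sketch (not from the benchmark suite): build a
// document one character at a time with the classic Automerge Text API.
const Automerge = require('automerge')

let doc = Automerge.from({ text: new Automerge.Text() })
const content = 'some longer document content ...'

for (let i = 0; i < content.length; i++) {
  // Each change inserts a single character. Automerge keeps every inserted
  // character, together with its op metadata, as a separate element in its
  // internal list, which is where the RAM and CPU cost comes from.
  doc = Automerge.change(doc, d => {
    d.text.insertAt(i, content[i])
  })
}
```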
This PR uses Automerge's `performance` branch as a dependency (https://github.com/automerge/automerge/pull/253). Since it does not install from npm, we need to build the Automerge distribution files before running the benchmarks.
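One plausible sequence for that (an assumption on my part; the exact commands depend on the build scripts in that branch's `package.json`):

```sh
# Assumption: the performance branch exposes a standard npm "build" script
# that produces the dist files the benchmarks import.
git clone -b performance https://github.com/automerge/automerge.git
cd automerge
npm install
npm run build
```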
As mentioned in Automerge's performance PR, the runtime performance (in terms of time complexity) is actually worse than in Automerge's current implementation, but the encoder is much more efficient than before.
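To make the comparison concrete, this is roughly how the two encoders can be exercised on the same content. It is a sketch under my own assumptions (Yjs's v2 update format via `Y.encodeStateAsUpdateV2`, and `Automerge.save` returning the binary columnar format on the performance branch), not the benchmark code itself:

```js
// Sketch: encode equivalent documents with both new (columnar) encoders.
const Y = require('yjs')
const Automerge = require('automerge') // local build of the performance branch

// Yjs: the v2 update format is the new columnar-style encoder.
const ydoc = new Y.Doc()
ydoc.getText('text').insert(0, 'hello world')
const yEncoded = Y.encodeStateAsUpdateV2(ydoc) // Uint8Array

// Automerge: on the performance branch, save() is assumed to produce the
// binary columnar encoding rather than the old JSON string.
let adoc = Automerge.from({ text: new Automerge.Text() })
adoc = Automerge.change(adoc, d => d.text.insertAt(0, ...'hello world'))
const aEncoded = Automerge.save(adoc)

console.log('Yjs bytes:', yEncoded.byteLength, 'Automerge bytes:', aEncoded.byteLength)
```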
The following benchmark results compare Yjs's new encoder with Automerge's new encoder (both based on columnar encoding):