Closed: mrrooijen closed this 4 years ago
I think they're available on macOS. I did find that lz4 had apparently been installed via Homebrew; after manually uninstalling it, I was still able to build the C extension. Unsure about Windows.
Alternatively compression could be opt-in. Though, having these optimizations out of the box is nice, as long as they don't cause issues.
Long term, I would like to have different backends for serialization (although I am not sure that is the right name for it). The goal of this class is really to map a component to a string and recover a component from that string. You can imagine another backend that would cut the size down to a fixed length by using an external datastore like Redis.
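The backend idea described above (map a component to a string, recover it from that string, optionally via an external datastore) could be sketched roughly like this. All names here (`MarshalBackend`, `KeyedBackend`, the in-memory store standing in for Redis) are hypothetical illustrations, not Motion's actual API:

```ruby
require "base64"
require "securerandom"

# Hypothetical sketch of a pluggable serialization backend.
# A backend only needs to map an object to a string and back.
class MarshalBackend
  # Serialize an object to a transport-safe string.
  def dump(component)
    Base64.strict_encode64(Marshal.dump(component))
  end

  # Recover the object from that string.
  def load(state)
    Marshal.load(Base64.strict_decode64(state))
  end
end

# A backend that cuts the payload down to a fixed-length key by
# storing the state externally. A plain Hash stands in for Redis here.
class KeyedBackend
  def initialize(store = {})
    @store = store
  end

  def dump(component)
    key = SecureRandom.uuid # fixed-length handle sent over the wire
    @store[key] = Marshal.dump(component)
    key
  end

  def load(key)
    Marshal.load(@store.fetch(key))
  end
end
```

With a keyed backend, the wire payload is always 36 bytes regardless of component size, at the cost of a datastore round trip on recovery.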
@mrrooijen This looks good to me, but I see you still have the PR marked as draft. Is there a reason that this should not be merged?
@alecdotninja Nope. I've removed the draft status so you can go ahead and merge it now. 👍
Add compression to the serialization pipeline.
https://github.com/unabridged/motion/issues/33#issuecomment-657567502
I came across this comment and wondered if we could reduce the amount of data that's being sent over the wire. Using a compression library would allow us to reduce bandwidth usage by quite a bit.
Real-world tests (web console/network tab):

(Deflate and Inflate timing screenshots)
While zlib's deflation speed is reasonable, its inflation speed turns out to be terrible.
Using a more performant compression library (lz4), it's possible to achieve fast (de)compression while retaining reasonable compression ratios. With lz4, deflation time goes down by 80% and inflation time by 99% compared to zlib. Here's a general benchmark comparing lz4 to other compressors: https://github.com/lz4/lz4/blob/dev/README.md#benchmarks
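As a rough illustration of how compression slots into a dump/load pair, here is a sketch using Ruby's built-in Zlib. Zlib is only a stand-in because it ships with Ruby; the PR swaps in lz4 via its C extension for the faster (de)compression discussed above, and the `dump_state`/`load_state` names are made up for this example:

```ruby
require "zlib"
require "base64"

# Sketch: compress the marshaled state before encoding for the wire,
# and decompress after decoding on the way back in.
def dump_state(object)
  Base64.strict_encode64(Zlib::Deflate.deflate(Marshal.dump(object)))
end

def load_state(string)
  Marshal.load(Zlib::Inflate.inflate(Base64.strict_decode64(string)))
end

# Component state tends to be repetitive, which compresses well.
payload    = { items: Array.new(100) { "repetitive component state" } }
compressed = dump_state(payload)
raw        = Base64.strict_encode64(Marshal.dump(payload))
```

Swapping the compressor means replacing only the `Zlib::Deflate.deflate` / `Zlib::Inflate.inflate` calls, which is why keeping this step inside the serializer (rather than opt-in at call sites) stays low-friction.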
Thoughts?
Note: the branch is called zlib because I initially wanted to use zlib, but its overall performance wasn't adequate, especially over WebSockets, where you want fast responses.