This adds zstd compression support.
The current two options, zlib and fastlz, are basically a choice between compression ratio and performance. You would choose zlib if you are memory-bound and fastlz if you are CPU-bound. With zstd, you get the performance of fastlz with the compression ratio of zlib, and it often wins on both. See this benchmark I ran on JSON files of varying sizes:
https://gist.github.com/rlerdorf/788f3d0144f9c5514d8fee9477cbe787
Taking just a 40k JSON blob, we see that zstd at compression level 3 reduces it to 8862 bytes. Our current zlib at level 1 gets worse compression at 10091 bytes and takes longer both to compress and to decompress.
The fact that decompression is dramatically faster with zstd is a win for most common memcache uses, since they tend to be read-heavy.
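To get a feel for the kind of comparison above, here is a minimal, self-contained sketch that measures size and timing for zlib level 1 on a JSON blob. It uses only Python's stdlib; the zstd side is shown commented out because it needs the third-party zstandard package (an assumption, not part of the stdlib). The blob here is synthetic, standing in for the 40k JSON files from the gist.

```python
import json
import time
import zlib

# Synthetic JSON blob standing in for the benchmark's real JSON files.
blob = json.dumps([{"id": i, "name": f"item-{i}", "tags": ["a", "b", "c"]}
                   for i in range(500)]).encode()

def bench(name, compress, decompress):
    """Report compressed size plus compress/decompress wall time."""
    start = time.perf_counter()
    packed = compress(blob)
    c_time = time.perf_counter() - start
    start = time.perf_counter()
    assert decompress(packed) == blob  # round-trip sanity check
    d_time = time.perf_counter() - start
    print(f"{name}: {len(blob)} -> {len(packed)} bytes, "
          f"compress {c_time * 1e3:.2f} ms, decompress {d_time * 1e3:.2f} ms")

# zlib at level 1, matching the "zlib 1" baseline discussed above.
bench("zlib-1", lambda b: zlib.compress(b, 1), zlib.decompress)

# The zstd side would use the third-party zstandard package, e.g.:
#   import zstandard
#   cctx = zstandard.ZstdCompressor(level=3)
#   dctx = zstandard.ZstdDecompressor()
#   bench("zstd-3", cctx.compress, dctx.decompress)
```

Exact numbers will differ from the gist since the input and machine differ; the point is the methodology, not the figures.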
The PR also adds a memcache.compression_level INI switch, which currently only applies to zstd compression. It could probably be made to also apply to zlib and fastlz.
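For illustration, setting the new switch in php.ini might look like this; only memcache.compression_level itself comes from this PR, and as noted it is currently honored only when zstd is the active compressor:

```ini
; Sketch of a php.ini fragment (directive from this PR).
; Level 3 matches the zstd level used in the benchmark above.
memcache.compression_level = 3
```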