When enabling compression in our production application, we're seeing a fairly large increase in memory allocations and GC duration. Based on some profiling, I was able to narrow it down to the zlib writers and readers that are created fresh and thrown away for every packet.
I've implemented re-use of the writers using sync.Pool, along with a few other changes such as not creating a zlib Writer when the packet data size is less than minCompressionLength, to help alleviate the pressure we see on the garbage collector (a rough sketch of the pooling approach is at the end of this description).

Here is a benchmark from before my changes:
And this is the benchmark after the changes.
I've been running my fork in production for a few days now, making SQL calls at a rate of about 1000/s. The application has been much more stable, and we've been able to scale down our cluster size because we're spending fewer CPU cycles on GC.
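For anyone curious, this is roughly the shape of the pooling approach. It's a simplified sketch, not the exact code in this PR; the function name `compressPacket` and the `minCompressionLength` value used here are illustrative assumptions.

```go
package main

import (
	"bytes"
	"compress/zlib"
	"fmt"
	"sync"
)

// Threshold below which packets are sent uncompressed.
// The value here is an assumption for illustration only.
const minCompressionLength = 50

// writerPool hands out reusable zlib writers so we don't allocate a new
// writer (and its internal buffers) for every packet.
var writerPool = sync.Pool{
	New: func() interface{} {
		return zlib.NewWriter(nil)
	},
}

// compressPacket compresses data when it is large enough to be worth
// compressing; small payloads are returned as-is and never touch zlib.
// Hypothetical helper, not the driver's actual implementation.
func compressPacket(data []byte) ([]byte, bool, error) {
	if len(data) < minCompressionLength {
		// Too small to benefit from compression; skip the zlib writer entirely.
		return data, false, nil
	}

	var buf bytes.Buffer
	zw := writerPool.Get().(*zlib.Writer)
	zw.Reset(&buf) // point the pooled writer at the new destination

	if _, err := zw.Write(data); err != nil {
		writerPool.Put(zw)
		return nil, false, err
	}
	if err := zw.Close(); err != nil {
		writerPool.Put(zw)
		return nil, false, err
	}
	writerPool.Put(zw)
	return buf.Bytes(), true, nil
}

func main() {
	payload := bytes.Repeat([]byte("SELECT 1;"), 20)
	out, compressed, err := compressPacket(payload)
	if err != nil {
		panic(err)
	}
	fmt.Printf("in=%d bytes, out=%d bytes, compressed=%v\n", len(payload), len(out), compressed)
}
```

The key point is that `Reset` repoints an already-allocated writer at a new destination, so the per-packet cost is a pool Get/Put instead of a full zlib writer allocation.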