andrein opened this issue 5 months ago
Adding the profiles that helped us track down this issue: profiles.zip
Seems this is a known issue: https://github.com/golang/go/issues/32371
What server version was used for the profiles?
Hi @derekcollison, the profiles were done on the nats:2.10.16-alpine image.
The server does not enable compression for websockets by default, but the Helm chart does. We are going to change that; compression should be opt-in. /cc @caleblloyd
PR was opened yesterday: https://github.com/nats-io/k8s/pull/912
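For anyone hitting this before picking up the chart change: compression can also be switched off explicitly on the server. The setting lives in the server's `websocket` configuration block; how that block is reached from the chart values (for example via the chart's merge mechanism) varies between chart versions, so treat the snippet below as a sketch of the server-side setting rather than an exact values path:

```
websocket {
  port: 8080
  # opt out of per-connection compression
  compression: false
}
```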
I see that https://github.com/klauspost/compress is mentioned as an alternative in the upstream ticket, and it's already a dependency of nats-server. Would it be worth switching the websocket compression implementation to it?
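For reference, klauspost/compress ships a `flate` package that mirrors the standard library's API, so the swap suggested above is largely mechanical. A minimal sketch of the idea — the `compressFrame` helper and the pool wiring are illustrative only, not the actual nats-server code:

```go
package main

import (
	"bytes"
	"io"
	"log"
	"sync"

	// Drop-in replacement for the standard library's compress/flate.
	"github.com/klauspost/compress/flate"
)

// Pool flate writers so each compressed frame does not pay for a fresh
// writer allocation, which is where most of the per-connection memory goes.
var flateWriters = sync.Pool{
	New: func() any {
		// Level 1 keeps the compressor state small; NewWriter only errors
		// on invalid levels, so the error is ignored here.
		w, _ := flate.NewWriter(io.Discard, 1)
		return w
	},
}

// compressFrame deflates one websocket payload using a pooled writer.
func compressFrame(payload []byte) ([]byte, error) {
	var buf bytes.Buffer
	w := flateWriters.Get().(*flate.Writer)
	defer flateWriters.Put(w)

	w.Reset(&buf)
	if _, err := w.Write(payload); err != nil {
		return nil, err
	}
	// Flush so the compressed bytes for this payload are fully emitted.
	if err := w.Flush(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	out, err := compressFrame([]byte("hello, compressed websocket world"))
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("compressed payload is %d bytes", len(out))
}
```

The pooling matters because each flate writer carries a sizable state allocation, which is what adds up across tens of thousands of compressed websocket connections.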
Released: https://github.com/nats-io/k8s/releases/tag/nats-1.2.0
Anything else to do here, or can we close?
Observed behavior
When upgrading from nats-server 2.9.20 (using Helm chart 0.19.17) to 2.10.16 (using Helm chart 1.1.12), we noticed a 10x memory usage increase on our cluster.
We tracked this down to the new chart enabling websocket compression by default.
The cluster has ~60k clients, most of them connecting over websockets. Memory usage increased from ~3GB/node to 30+GB/node.
Expected behavior
Not sure how to answer this; I would've expected some impact, but a 10x increase in memory usage caught us by surprise.
Server and client version
2.9.20 and 2.10.16 are affected; probably others too.
Host environment
Cluster running on Kubernetes using containerd.
Steps to reproduce
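The steps were not filled in in the original report. A rough sketch of a reproduction, assuming a server with a websocket listener and compression enabled (as the 1.1.x chart did by default) and using nats.go's websocket support; the URL, subject, and connection count below are placeholders:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Placeholder scale; the affected cluster had ~60k websocket clients.
	// Opening this many connections from one process may need a raised
	// file-descriptor limit or several client processes.
	const numClients = 10000

	conns := make([]*nats.Conn, 0, numClients)
	for i := 0; i < numClients; i++ {
		// Connect over websockets and request compression; the server must
		// have a websocket listener with compression enabled.
		nc, err := nats.Connect("ws://localhost:8080",
			nats.Compression(true),
			nats.Name(fmt.Sprintf("ws-client-%d", i)),
		)
		if err != nil {
			log.Fatalf("connect %d: %v", i, err)
		}
		if _, err := nc.Subscribe("bench", func(_ *nats.Msg) {}); err != nil {
			log.Fatalf("subscribe %d: %v", i, err)
		}
		conns = append(conns, nc)
	}

	// Generate a little traffic so the per-connection compressors are used,
	// then hold the connections open and watch the server's memory
	// (container RSS or the mem field in /varz).
	for _, nc := range conns {
		_ = nc.Publish("bench", []byte("hello"))
	}
	time.Sleep(10 * time.Minute)

	for _, nc := range conns {
		nc.Close()
	}
}
```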