AmadeusITGroup / Redis-Operator

Redis Operator creates/configures/manages Redis clusters atop Kubernetes
MIT License

Max memory setting ignores container overhead #48

Open showermat opened 5 years ago

showermat commented 5 years ago

When I set a memory limit for Redis pods using the provided redis-cluster chart, it appears to take the exact number I provide for the memory limit and place that in redis.conf on the maxmemory line. This neglects the overhead of Redis itself, the redisnode executable, and any other miscellanea that Kubernetes includes in its overhead accounting. In my setup, this adds up to about 40 MB that the container will use beyond Redis's memory limit. Thus, when Redis approaches capacity, the container's memory usage exceeds its configured resource limit, and Kubernetes kills the pod. I've been working around this by manually updating Redis's maxmemory to 100 MB less than the resource limit, which solves the problem. It would be nice if this could be automated so that resource limits work as expected.
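The workaround described above can be sketched as a small helper the operator could apply when rendering redis.conf. This is a hypothetical sketch, not code from the operator: `maxMemoryFor` and the 100 MB `overheadBytes` allowance are assumptions based on the reporter's measurements (~40 MB observed overhead, 100 MB used as a safe margin).

```go
package main

import "fmt"

// overheadBytes is an assumed fixed allowance for the redisnode executable,
// Redis's own baseline footprint, and Kubernetes overhead accounting
// (~40 MB observed in practice; 100 MB used here as a safe margin).
const overheadBytes int64 = 100 * 1024 * 1024

// maxMemoryFor returns the value to place on redis.conf's maxmemory line
// given the pod's memory resource limit, leaving headroom so the container
// stays under its limit as Redis approaches capacity.
func maxMemoryFor(limitBytes int64) int64 {
	m := limitBytes - overheadBytes
	if m < 0 {
		return 0 // limit too small to leave any room for data
	}
	return m
}

func main() {
	limit := int64(1) << 30 // 1Gi resource limit
	fmt.Println(maxMemoryFor(limit))
}
```

With a 1Gi limit this yields a maxmemory of 1073741824 − 104857600 = 968884224 bytes, so Redis evicts before the container hits its limit and gets OOM-killed.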

007 commented 5 years ago

@showermat unfortunately the operator will need to do much better than that for long-term stability. The Redis admin docs specify:

If you are using Redis in a very write-heavy application, while saving an RDB file on disk or rewriting the AOF log Redis may use up to 2 times the memory normally used. The additional memory used is proportional to the number of memory pages modified by writes during the saving process, so it is often proportional to the number of keys (or aggregate types items) touched during this time. Make sure to size your memory accordingly.

So for 100MB of data capacity in Redis, you'll need to specify 200Mi in the k8s resource limit, and potentially 240Mi as per your use-case to be 100% safe. If you need 1GB of data capacity then the scale remains (1GB -> 2GB), but the overhead (~40MB) is pretty consistent.
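The sizing rule above can be written as a one-line formula: limit = 2 × data capacity + fixed overhead. A minimal sketch, assuming the ~40 MB per-process overhead reported in this issue and worst-case copy-on-write doubling during RDB save / AOF rewrite (`limitFor` is a hypothetical name, not part of the operator):

```go
package main

import "fmt"

// overheadBytes is the assumed roughly constant per-pod overhead
// (~40 MB per the reporter's measurements).
const overheadBytes int64 = 40 * 1024 * 1024

// limitFor returns a safe Kubernetes memory resource limit for a desired
// Redis data capacity, allowing for up to 2x memory use while Redis is
// saving an RDB file or rewriting the AOF log.
func limitFor(dataBytes int64) int64 {
	return 2*dataBytes + overheadBytes
}

func main() {
	// 100Mi of data capacity -> 240Mi limit (2*100Mi + 40Mi).
	fmt.Println(limitFor(100 * 1024 * 1024))
}
```

For 100Mi of data this gives 251658240 bytes, i.e. exactly 240Mi, matching the figure above.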

Unless you're going crazy with writes, you're probably okay simply doubling your resource limit, since that allows for the binary overhead and leaves spare padding for the worst-case doubling of touched-key allocations.