ivancoppa opened 5 years ago
We are experiencing the same issue. We have configured Dynomite with MAX_MSGS: 100000 and MBUF_SIZE: 8192. According to our calculations, Dynomite should not consume more than ~800 MB of RAM, but in reality it consumes far more than that (1.4 GB and counting), up to the point where it is killed by the OOM killer. Any help on this issue would be appreciated.
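For reference, the ~800 MB figure above follows from treating the mbuf pool as bounded by roughly mbuf_size * max_msgs bytes. A minimal back-of-envelope sketch (the formula is an assumption about how the ceiling is derived, not taken from Dynomite's source):

```python
# Back-of-envelope estimate of the mbuf memory ceiling described above.
# Assumption: the pool is bounded by roughly MBUF_SIZE * MAX_MSGS bytes.
MBUF_SIZE = 8192     # bytes, from the config in this comment
MAX_MSGS = 100_000   # from the config in this comment

ceiling_bytes = MBUF_SIZE * MAX_MSGS
print(f"{ceiling_bytes / 1024**2:.2f} MiB")  # -> 781.25 MiB (the "~800 MB")
```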
@ivancoppa @lhucinequr I will investigate this and get back.
A quick update on this issue: we have run a few more tests. We are running Dynomite (v0.7.0) in a simple 3-node cluster, with the following configuration on each node:
dyn_o_mite:
  datacenter: bgl
  rack: rack1
  dyn_listen: 0.0.0.0:8101
  dyn_seed_provider: simple_provider
  dyn_seeds:
    - dynomite-bench-node-2:8101:rack2:bgl:bglr2n1
    - dynomite-bench-node-3:8101:rack3:bgl:bglr3n1
  listen: 0.0.0.0:8102
  servers:
    - 127.0.0.1:6379:1
  tokens: 'bglr1n1'
  pem_key_file: /usr/local/etc/dynomite/dynomite.pem
  data_store: 0
  stats_listen: 0.0.0.0:22222
  mbuf_size: 4096
  max_msgs: 100000
  read_consistency: DC_ONE
  write_consistency: DC_SAFE_QUORUM
According to the documentation, Dynomite should not use more than 4096 * 100000 bytes ≈ 390.6 MiB of memory, but after running the cluster for 2 days each node now uses roughly 655300 KB ≈ 640 MiB (according to http://localhost:22222/info):
curl -s http://localhost:22222/info | jq .dyn_memory
655300
I'm pretty sure this trend will continue until all memory is consumed and the Dynomite process is killed by the OOM killer (we use Docker and set a hard memory limit on each node). Are we doing something wrong? Is this the expected behavior?
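One way to confirm the trend is to poll the /info stats endpoint shown above and log dyn_memory over time. A minimal sketch, assuming stats_listen on localhost:22222 (as in the config) and that /info returns a JSON object with a top-level "dyn_memory" field, as the curl output suggests:

```python
# Sketch: watch Dynomite's dyn_memory counter over time to confirm whether
# usage grows unbounded. Assumptions: stats endpoint at localhost:22222 (per
# stats_listen above) and a JSON payload with a "dyn_memory" field.
import json
import time
import urllib.request

STATS_URL = "http://localhost:22222/info"  # from stats_listen in the config

def parse_dyn_memory(payload: str) -> int:
    """Extract the dyn_memory counter from the /info JSON payload."""
    return json.loads(payload)["dyn_memory"]

def fetch_dyn_memory(url: str = STATS_URL) -> int:
    with urllib.request.urlopen(url) as resp:
        return parse_dyn_memory(resp.read().decode())

if __name__ == "__main__":
    while True:
        print(time.strftime("%H:%M:%S"), fetch_dyn_memory())
        time.sleep(60)  # sample once a minute; interval is arbitrary
```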
@smukil Any updates? Did you have time to investigate the issue?
@ivancoppa I was unable to track down any leaks. I'll get back to it in a bit and try to have an update soon.
Hey all - we just started using Dynomite and ran into this issue as well. I was able to track it down and opened PR #710 to get it resolved.
Thanks @kjlaw89, I will try your patch
We tested Dynomite with version 0.6.15 and with the rel_0.6_prod branch, and we experienced this issue as well: with mbuf_size: 16k and max_msgs: 100000, Dynomite went over 2 GB of memory usage. On top of that, memory consumption kept creeping up even at rest.
Now we are starting a new test with the patch suggested by @kjlaw89.
Hi, I'm testing Dynomite (v0.7.0) with the goal of having all the data written across 3 datacenters without any sharding.
What I'm experiencing is a massive amount of memory usage, and I'm trying to understand what I'm missing.
I did not change the default values for mbuf_size and max_msgs, so I was expecting at most something like 3.05 GB of memory usage, considering the default values.
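The 3.05 GB figure is consistent with defaults of mbuf_size = 16384 bytes and max_msgs = 200000 under the same mbuf_size * max_msgs ceiling; a quick sketch (those default values are an assumption inferred from the arithmetic, not quoted from this comment):

```python
# Where the ~3.05 GB expectation plausibly comes from. Assumption: default
# mbuf_size of 16384 bytes and default max_msgs of 200000, combined via the
# mbuf_size * max_msgs ceiling used earlier in the thread.
DEFAULT_MBUF_SIZE = 16_384   # bytes (assumed default)
DEFAULT_MAX_MSGS = 200_000   # (assumed default)

ceiling_bytes = DEFAULT_MBUF_SIZE * DEFAULT_MAX_MSGS
print(f"{ceiling_bytes / 1024**3:.2f} GiB")  # -> 3.05 GiB
```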
What I'm currently seeing is:
dyn_memory: 7.5G
top: [screenshot of top output, not preserved]
I'm currently testing this configuration of Dynomite (attached as a screenshot).
[Image of the memory usage over time]