Open MLStoltzenburg opened 1 year ago
How much TPS do you produce? And why is that a bug? So far everything works as expected. There is no known memory leak. If you are creating more load than your server/node can handle, then that's your problem ;)
The spammer just connects and memory consumption starts to increase. It doesn't send any messages.
Yes! It's not necessarily a bug in Hornet! I assumed it was a bug because it works very well in Docker. Sorry! :-) I used it on Docker for days before I started to refactor the scripts.
If you want more evidence, I'll be happy to help.
I have been using Hornet 1.2 on Kubernetes for a long time and it works great in my environment.
I made a short film showing the behavior! Hope this helps!
https://github.com/iotaledger/hornet/assets/2595026/beaf6456-5259-4c99-b2be-d86008d666f3
Hi @muXxer
I found the problem: I entered the wrong address for the indexer. I configured restApi.bindAddress with 0.0.0.0, so the spammer tried to connect to the indexer at 0.0.0.0, which is a wrong endpoint. After setting the correct endpoint, the memory consumption problem did not occur.
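For reference, a minimal sketch of the two sides of this misconfiguration. The `restAPI.bindAddress` key matches Hornet 2.x defaults; the client-side key name and the service hostname are illustrative assumptions, not taken from the actual scripts:

```json
{
  "restAPI": {
    "bindAddress": "0.0.0.0:14265"
  }
}
```

Binding to `0.0.0.0` only tells the node which interfaces to listen on; a client such as the spammer must be pointed at a reachable address instead, e.g. something like `"indexerEndpoint": "http://hornet-indexer:14265"` (hypothetical key and hostname), never `0.0.0.0`.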
I corrected my script, but in your opinion, could this be a problem in Hornet?
Thanks!
Hi, I am migrating the one-click-tangle to Hornet 2.0.1. The migration went very well, but Hornet 2.0.1 is unstable on Kubernetes. I increased the memory of my nodes, but the problem still occurs.
The nodes had 16Gi each and now have 32Gi each.
I needed to set vm.overcommit_ratio=90 and Hornet stabilized for a while, but then memory consumption became unstable until I got OOM again.
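For context, a sketch of how the overcommit setting mentioned above is typically made persistent via sysctl. Note that `vm.overcommit_ratio` only takes effect under strict overcommit; the comment names only the ratio, so the accompanying mode here is an assumption:

```
# /etc/sysctl.conf (or a drop-in under /etc/sysctl.d/)
# vm.overcommit_ratio is only honored when vm.overcommit_memory=2
# (strict accounting); with the default mode 0 it has no effect.
vm.overcommit_memory = 2
vm.overcommit_ratio = 90
```

Apply with `sysctl -p` or reboot the node.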
Event error: "Memory cgroup out of memory: Killed process 180904 (hornet) total-vm:12642108kB, anon-rss:9701844kB, file-rss:43452kB, shmem-rss:0kB, UID:65532 pgtables:19416kB oom_score_adj:984"
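The "Memory cgroup out of memory" wording in the event above indicates the pod's cgroup memory limit was hit (anon-rss was about 9.3 GiB at kill time), rather than node-wide memory exhaustion. A hedged sketch of where such a limit is usually set in the pod spec; the values here are illustrative, not taken from the actual deployment:

```yaml
resources:
  requests:
    memory: "8Gi"
  limits:
    memory: "10Gi"  # cgroup limit; exceeding it triggers the OOM kill seen above
```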
P.S.: The problem happens after the spammer connects to Hornet 2.0.1.
Expected behavior: stable memory consumption in Hornet 2.0.1.
Environment information:
Additional context
Hornet Deployment
Config.json
Spammer
Config Spammer