michaelst closed this issue 3 months ago
I did some initial testing with cowboy and did not run into the same issue. Happy to provide whatever other info would help with looking into this.
Actually, this turned out to be a networking issue that I somehow didn't keep consistent between the tests. I apologize for the false report.
No worries! Thanks for the note regardless!
Out of curiosity, what sort of numbers are you seeing out of Bandit once you made things consistent?
I was seeing 9.95 GB of memory (as reported by the BEAM) for 260k connections. We are running in k8s, and 260k was the same limit I ended up hitting with cowboy as well; with network policies enabled it is cut to 130k for some reason. However, the k8s pod was reporting something around 14 GB, I think; I don't have historical data captured on that.
Are there any numbers in particular that would be useful? I can capture more details the next time I run a load test.
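As an aside, here is a minimal sketch of how those VM-level numbers could be grabbed from a remote IEx shell during the next run. The module name is purely illustrative; the calls themselves are standard `:erlang` introspection functions, not anything project-specific.

```elixir
defmodule LoadTestSnapshot do
  @moduledoc "Rough snapshot of BEAM-level numbers during a load test (illustrative only)."

  def capture do
    %{
      # Total memory as reported by the BEAM, in bytes
      total_memory_bytes: :erlang.memory(:total),
      # Memory used by Erlang processes
      process_memory_bytes: :erlang.memory(:processes),
      # Memory used by binaries (often significant for socket-heavy workloads)
      binary_memory_bytes: :erlang.memory(:binary),
      # Current number of processes and ports (each TCP socket uses a port)
      process_count: :erlang.system_info(:process_count),
      port_count: :erlang.system_info(:port_count)
    }
  end
end

# Example usage from a remote shell:
#   LoadTestSnapshot.capture()
```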
I appear to be running into a limit of about 130k connections. Once I get into that range, the server stops responding to requests, with nothing in the logs.
Here are some metrics showing two nodes running. The VM args are set to 1M for ports/processes. Is there potentially another limit we are hitting, or a limitation in Bandit?
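For reference, a sketch of what those flags typically look like in a release vm.args file is below. The file location is an assumption (e.g. `rel/vm.args.eex` for a Mix release); the project's actual config may differ.

```
## Assumed vm.args sketch for the limits described above.

## Maximum number of Erlang processes (default is 262144)
+P 1000000

## Maximum number of ports; each TCP connection consumes one port (default is 65536)
+Q 1000000
```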
application.ex
router
mix.lock