fritzy / SleekXMPP

Python 2.6+/3.1+ XMPP Library
http://groups.google.com/group/sleekxmpp-discussion

what throughput can I expect? #375

Closed coffeeowl closed 9 years ago

coffeeowl commented 9 years ago

Hello,

I am testing my server component, which underperforms: a burst of 800 simultaneous requests reliably kills it. So I set up a simple load test: 100 bots connect to a server (ejabberd) and send requests to the component. The original requests were Ad-Hoc commands; after the first test I changed them to plain messages for simplicity.

What I find surprising is that the component can't even handle 100 bots. Each bot sends a request at a random interval of 1 to 5 seconds. When the component replies with just the string "YES", the response time stays constant and under 100 ms, but even simple JSON parsing of a small string (around 100 characters) inside the message handler causes an overload: within a couple of minutes the response time climbs to 10 s and keeps increasing.
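One common cause of this pattern (not confirmed for this setup) is doing blocking work directly in the library's event-processing thread, so every slow handler call delays all later stanzas. A possible mitigation is to have the handler enqueue the work and return immediately, with a small worker pool doing the parsing. Below is a minimal sketch of that idea; `handle_message` and `process_payload` are hypothetical names, not SleekXMPP API:

```python
import json
import queue
import threading

# Sketch: keep the XMPP event thread free by pushing slow work
# (JSON parsing, I/O) onto a small pool of worker threads.
work_queue = queue.Queue()

def process_payload(body):
    # The "slow" part that previously ran inside the handler.
    return json.loads(body)

def worker():
    while True:
        item = work_queue.get()
        if item is None:              # shutdown sentinel
            work_queue.task_done()
            break
        body, reply = item
        reply(process_payload(body))  # reply() would send the stanza
        work_queue.task_done()

def handle_message(body, reply):
    # Called from the event thread; returns immediately.
    work_queue.put((body, reply))

# Start a few workers.
for _ in range(4):
    threading.Thread(target=worker, daemon=True).start()
```

With SleekXMPP itself, a similar effect can be had by registering the handler with `threaded=True` in `add_event_handler`, at the cost of one thread per event rather than a fixed pool.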

Another interesting thing is that even after I stop my test and query the component myself (in Pidgin), it still takes the same amount of time for the component to reply, i.e. 10 s or more. I thought some queue might still be full, but even after waiting for a while, the response time doesn't go down. Only after restarting the component does the timing drop back below 100 ms. I have observed the same behaviour with Prosody.

Am I missing something?

coffeeowl commented 9 years ago

OK, I'll talk to myself here. So far the bottleneck seems to be ejabberd itself. The send queue in SleekXMPP is almost empty all the time, yet it takes ~20 s to complete a simple ad-hoc command under what I'd call a tiny load: 500 bots running the same command every second.
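For scale, the load described here implies a hard per-request budget anywhere stanzas are handled serially. A back-of-envelope check (numbers taken from the comment above):

```python
# Back-of-envelope throughput check for the load test described above.
bots = 500
requests_per_bot_per_sec = 1.0
arrival_rate = bots * requests_per_bot_per_sec   # 500 requests/s total

# Any single-threaded stage (server routing, component handler) must
# average under this per request, or its backlog, and hence the
# observed latency, grows without bound.
max_service_time = 1.0 / arrival_rate            # seconds per request
print(max_service_time * 1000, "ms per request budget")
```

So at 500 requests/s a serial stage has only 2 ms per request; anything slower queues up, which is consistent with latency climbing steadily rather than plateauing.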

bear commented 9 years ago

You have discovered the most frustrating part of XMPP bot work - it is very dependent on the server you work with.

Apologies for not replying to this earlier.

I'm going to mark this as a Question and then close the issue.