Open sacOO7 opened 5 months ago
➤ Automation for Jira commented:
The link to the corresponding Jira issue is https://ably.atlassian.net/browse/SDK-4077
We can either write our own load-testing script (https://medium.com/@yusufenes3494/how-to-build-your-own-load-test-a-step-by-step-guide-1a8367f7f6a2), or clone the python-locust project and modify https://github.com/locustio/locust/blob/master/locust/clients.py to use the httpx client instead of python-requests. The latter would let us use all of locust's features, such as result tracking in the web UI, without writing that code ourselves.
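If we go with the first option, a hand-rolled load tester can be quite small. Below is a minimal sketch, assuming all we need is per-request latency stats (not locust's web UI); `run_load_test` and `make_request` are illustrative names, and in a real run `make_request` would issue an HTTP GET with httpx or requests.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def run_load_test(make_request, users=10, requests_per_user=5):
    """Fire `users` concurrent workers, each issuing `requests_per_user`
    requests, and collect per-request latencies in seconds."""
    latencies = []

    def worker():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            make_request()  # e.g. lambda: client.get("https://rest.ably.io/time")
            latencies.append(time.perf_counter() - start)

    # Each worker runs in its own thread, simulating one concurrent user.
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(worker)

    return {
        "count": len(latencies),
        "mean_ms": statistics.mean(latencies) * 1000,
        "p95_ms": statistics.quantiles(latencies, n=20)[-1] * 1000,
    }
```

This is roughly what locust gives us for free, which is the argument for the second option: reuse locust's scheduling and reporting and only swap the transport in `clients.py`.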
We will need a dedicated Ably RDP server to run this script against an Ably account with no rate limits : )
I don't see any point in generalizing this load-tester client, since this is a library-specific issue. Most of the time, HTTP client libraries are stable and don't need testing themselves; load testing is specifically for exercising servers under load, not the client.
It also doesn't make sense to load test clients across different SDKs: every client is written in a different language, each language has different performance characteristics, and we would need a separate script for each. For now, we can just focus on ably-python and check that it works as expected.
A load-testing TPS graph is given at https://blog.devgenius.io/10-reasons-you-should-quit-your-http-client-98fd4c94bef3
- Test with both servers
- Test with requests and niquests -> singleton instance, run in both sync/async mode
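The "singleton" here is the shared-client pattern: every worker reuses one client object so all requests share a single connection pool. A minimal sketch of that pattern, with illustrative names (`SharedClient`, `configure`) — the real factory would be `requests.Session`, `niquests.Session`, or `httpx.Client` / `httpx.AsyncClient`:

```python
class SharedClient:
    """Lazily builds one client and hands the same instance to every worker."""

    _factory = object  # placeholder factory; swap in e.g. requests.Session
    _instance = None

    @classmethod
    def configure(cls, factory):
        # Choose which client class to build, and reset the cached instance.
        cls._factory = factory
        cls._instance = None

    @classmethod
    def get(cls):
        # Create the client on first call, then always return the same object.
        if cls._instance is None:
            cls._instance = cls._factory()
        return cls._instance
```

Using one shared client matters for the comparison: without it, each request would pay connection-setup cost and the pool-limit settings under test would never come into play.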
Created a separate load-test repo to run load tests using locust.
Executed GET requests against https://rest.ably.io/time with 10 users, using a singleton instance of requests and of httpx.
PS. The average response time for a browser request to https://rest.ably.io/time is ~66 ms.
Tests were conducted on an Intel i7-11800H, 16 GB RAM Windows machine.
For 100 users, httpx struggles, with several spikes in the number of requests made. Comparing it with python-requests, we get a much more stable graph with the lowest latency for the requests made.
Pool settings used: for python-requests, `pool_connections` = `pool_maxsize` = 100; for httpx, `httpx.Limits(max_keepalive_connections=100, max_connections=100, keepalive_expiry=120)`.
For 50 users, httpx again shows several spikes in the number of requests made; the average response time is ~90 ms.
Comparing it with python-requests, we get a much more stable graph with the lowest latency for the requests made; the average response time is ~70 ms.
@ttypic Until we get a proper resolution from httpx, we can point to the documentation at https://www.python-httpx.org/advanced/#pool-limit-configuration. Devs can adjust these limits according to their load requirements. This doesn't guarantee full stability, but it will reduce spikes in the requests made.
> https://ablyreal-time.slack.com/archives/C030C5YLY/p1698238346488369?thread_ts=1697535419.670029&cid=C030C5YLY
>
> Do load testing on a dedicated server using:
>
> - httpx 0.24.1 and http2 (ably-python v2.0.3)
> - httpx 0.24.1 and http1.1 (ably-python v2.0.3)
> - httpx 0.25.2 and http2 (ably-python v2.0.4)
> - httpx 0.25.2 and http1.1 (ably-python v2.0.4)

┆Issue is synchronized with this Jira Task by Unito