Not sure about the proper way to do this, but I tested this with the help of a couple of friends using https://artillery.io/ with the following config, test-ws.yaml:
config:
  target: "wss://ws.example.com/app/yourPusherAppKey?protocol=7&client=js&version=4.3.1&flash=false"
  phases:
    - duration: 600
      arrivalRate: 10
      name: "Ramping up the load"
scenarios:
  - engine: "ws"
    flow:
      - loop:
          - think: 29 # pusher expects a ping event every 30 seconds or it disconnects the client
          - send: '{"event":"pusher:ping","data":{}}'
We each got a copy of the above script and just let it run. I monitored the server from my end to see how many connections it allowed. We ended testing when we were able to reach around 7000 or 8000 concurrent connections, which was enough for our needs, but there is no reason to doubt it would support many more.
@edvordo Testing a ping is not the same as testing a subscribe-to-channel event. You might want to test by sending a subscribe event, something like the sketch below.
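For example, a pusher:subscribe frame for a public channel can be sent before the ping loop. This is only a rough sketch: "load-test" is a placeholder channel name, and private/presence channels would additionally need auth data in the subscribe payload.

scenarios:
  - engine: "ws"
    flow:
      - send: '{"event":"pusher:subscribe","data":{"channel":"load-test"}}' # subscribe before idling
      - loop:
          - think: 29 # keep the connection alive with pings
          - send: '{"event":"pusher:ping","data":{}}'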
I needed to know how many concurrent connections the server can handle. That is what the script does: it created and maintained 7k+ active connections.
Why should I subscribe to some channel?
@edvordo Will your real-world clients only be pinging the server or are they going to subscribe to a channel (or multiple channels) and listen for changes? 7k idle (ping every 29 seconds) connections is not the same as 7k subscribed connections that are receiving and/or sending data.
I didn't need to stress test the traffic between client and server, just whether it can hold thousands of people at the same time. I'd assume that's pretty much what the docs test was also doing.
But I get that actual traffic between clients and the server will affect the server and how many users it can handle. Then again, most likely not by a whole lot (e.g. halving it), I'd imagine; feel free to correct me if I'm wrong.
As I said in my first answer, I'm not sure if the way I did it is the proper way, but it gave me some results and can point users in a direction that can give them results; the script can be extended to do almost anything :)
@edvordo So we're trying to replicate the success you've had with your test and are using the same setup (we used a slightly different one before), and we're not able to break 1.7k concurrent connections without errors in the artillery output. We seem to be getting a lot of 504s, and if we open the webpage during the test, the socket connection fails. We start getting errors at 1.7k connections, and then every 10 seconds we get around ~100 errors for every ~100 scenarios launched.
We've already increased the maximum number of file descriptors on our server (an EC2 m5n.2xlarge).
Do you have any advice on what else we could try?
I am assuming you are referring to one of these settings:
minfds=10240 ; (min. avail startup file descriptors; default 1024)
ulimit -n 10000
Both are probably good.
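For context, the minfds setting lives in the [supervisord] section of the supervisor configuration; a minimal sketch, assuming you run the websockets server under supervisord:

[supervisord]
minfds=10240

The ulimit -n value is the per-user open-file limit in the shell for whatever user runs the process.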
Not completely sure about your setup, but if you are using nginx, I also increased the setting mentioned here https://beyondco.de/docs/laravel-websockets/basic-usage/ssl#nginx-worker-connections
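That setting is nginx's worker_connections; a rough sketch of the relevant block (the exact value depends on how many connections you expect):

events {
    worker_connections 10240;
}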
But even with that, I was at the point you are at: a bit over 1000 connections, then a 504, and the web app was not accessible from the browser. What finally solved it was installing the ev extension via pecl, as suggested here: https://beyondco.de/docs/laravel-websockets/faq/deploying#changing-the-event-loop
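For reference, the install itself is just the PECL command below; depending on your setup you may also need to enable the extension in php.ini afterwards:

pecl install ev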
This setup pushed us past the ~1k connection barrier to 7k+.
Hope it helps.
Hey @edvordo, thank you for the answer. It seems that what solved it for us was increasing the soft and hard file limits in the global limits file (/etc/security/limits.conf) rather than the custom file for the forge user. The entries we added are:
forge soft nofile 100000
forge hard nofile 1000000
and now we're able to break 20k+ connections at 10 new connections per second (arrival rate). We encounter errors again when we increase the arrival rate to 30 new connections per second, just FYI.
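For what it's worth, artillery can also ramp the arrival rate within a phase instead of jumping straight to a higher value, which makes it easier to see where things start to fail; a sketch with placeholder values:

config:
  phases:
    - duration: 600
      arrivalRate: 10
      rampTo: 30
      name: "Ramp from 10 to 30 new connections per second"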
Glad you were able to solve your issue :)
Also, thanks for letting me know that a higher arrival rate might break it again. The project I tested is no longer active, but I will keep that in mind for any future projects.
How can I do this configuration in Docker?
With Docker you have full control over the configuration; just set up whatever you need in your Dockerfile(s) and build your image(s). One way to raise the file limits is sketched below.
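A rough sketch, assuming docker-compose (the service name "websockets" and the limit values are placeholders):

services:
  websockets:
    build: .
    ulimits:
      nofile:
        soft: 100000
        hard: 100000

The same limits can be passed to docker run with --ulimit nofile=100000:100000.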
That being said, this project, despite a promise made 9 months ago, seems to be abandoned, so searching for a different solution may not be a bad idea.
I'm on Windows, using Apache. I've written a small simulator that opens multiple webpages, each of which contains a listener for my channel. It works correctly until we reach 200 connections; after that (for example, when we reach 280 connections), the simulator won't deliver any messages to clients.
I ran the command
ulimit -n 1000
but the problem is not solved!
I know there is a limitation in Chrome and other browsers like Firefox and Edge where each of them supports only around 200 tabs, so I made a page with 20 iframes to establish 20 connections per tab, and thus 2000 connections across 100 tabs, but the server stopped responding after 10 tabs!
Regarding https://beyondco.de/docs/laravel-websockets/faq/scaling: how do you test the maximum number of connections that can be handled successfully?