jondubois opened this issue 7 years ago
@jondubois do you want me to write a simple plugin for this, kind of like the publish-out plugin?
@happilymarrieddad That sounds good!
Also, I'm writing a basic stress testing client for SC which I will use to test these plugins to see how they affect things. https://github.com/SocketCluster/sc-stress-tests
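For anyone curious about what such a client does, here is a minimal sketch (not the actual sc-stress-tests code, just an illustration against the socketcluster-client v9 API; the host, port, and client count are placeholder values):

```js
// Minimal stress-client sketch: open many socketcluster-client connections
// and track how many succeed or fail. Values below are illustrative.
var socketCluster = require('socketcluster-client');

var NUM_CLIENTS = 1000;
var connected = 0;
var failed = 0;

for (var i = 0; i < NUM_CLIENTS; i++) {
  var socket = socketCluster.connect({
    hostname: 'localhost',
    port: 8000
  });
  socket.on('connect', function () {
    connected++;
  });
  socket.on('error', function (err) {
    // Attaching an 'error' listener matters: without one, socket errors
    // are thrown and crash the client process.
    failed++;
  });
}

setInterval(function () {
  console.log('connected: ' + connected + ', failed: ' + failed);
}, 5000);
```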
@jondubois I am new to SocketCluster. Node: v7.2.1, SC server: socketcluster@9.1.3, SC client: socketcluster-client@9.0.0.
I tried a stress test following your sample code. The test goes well up to 4k users, but I can find no way to increase it further. I adjusted the FD limit and also increased the brokers from 1 to 5, but there was no improvement.
When going over 4k, I get the error below:
```
Test client CPUs used: 4
serverHostname: <x.x.x.x>
serverPort: 8000
numClients: 5000

D:\FiscoAPP\trunk\01.src\JS_StressTest\node_modules\socketcluster-client\lib\scsocket.js:533
      throw err;
      ^

SocketProtocolError: Socket hung up
    at SCSocket._onSCClose (D:\FiscoAPP\trunk\01.src\JS_StressTest\node_modules\socketcluster-client\lib\scsocket.js:631:15)
    at SCTransport.
```
Adjusting the OS limits did not help either.
I wonder how to run the same single-server test that you show at http://socketcluster.io/#!/performance. Please kindly suggest the detailed steps.
@RubouChen Did you also increase the ulimit? Run ulimit -n on Linux to find out what your limit currently is.
@jondubois Thank you for your quick reply. ulimit -n gives 1000000 on our system (CentOS Linux release 7.3.1611). Other settings: brokers: 5, workers: 3, Node: v7.2.1, SC server: socketcluster@9.1.3, SC client: socketcluster-client@9.0.0.
I ran the same test from two different client machines and got different results.
Client 1: Windows 7 64-bit, only 4k connections OK. Model: HP ProDesk 600 G2 SFF, CPU: 4 CPUs, Core i5-6500 @ 3.20GHz, RAM: 12GB.
Client 2: CentOS Linux release 7.3.1611, 16k connections OK. Model: HVM domU, CPU: 4 CPUs, Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz, RAM: 16GB.
So, are there some tuning points needed on the client side? One more question: when I reached 16k connections and then stopped the client with Ctrl+C, I could not reach 16k connections again on the next run. It seems out of my control. Would you kindly tell me how to get stable connections?
@RubouChen If you shut down the instance and bring it back up later, by default clients should auto reconnect after some time (takes a few seconds by default; but they shouldn't all reconnect at once). See autoReconnect
options here: https://socketcluster.io/#!/docs/api-socketcluster-client
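For reference, a minimal client-side configuration showing those reconnect options looks like this (values are illustrative; the randomness field is what spreads reconnect attempts out so that thousands of clients don't all reconnect at the same instant):

```js
var socketCluster = require('socketcluster-client');

// Illustrative reconnect settings; see the linked docs for the defaults.
var socket = socketCluster.connect({
  hostname: 'example.com',
  port: 8000,
  autoReconnect: true,
  autoReconnectOptions: {
    initialDelay: 10000, // ms before the first reconnect attempt
    randomness: 10000,   // random extra delay added to each attempt
    multiplier: 1.5,     // exponential backoff factor
    maxDelay: 60000      // upper bound on the delay between attempts
  }
});
```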
I'm having a similar kind of issue: connections do not increase beyond 4K, and I get the same error. The ulimit and other sysctl parameters were updated, but there was no improvement in the number of connections. Can you please suggest a solution, considering the good VMs we are using?
I have exactly the same issue. I am running the server and client from the sc-stress-tests tool mentioned above and am not able to go beyond 4K connections; I get the same error. The VM on which the SC server is running has 8 cores with 64GB RAM. I tried to generate load from 6 VMs, with each VM simulating 1000 users, and applied all recommended sysctl and ulimit parameters. The same behaviour is observed on a 2-core, 4GB VM, so CPU and RAM resources are not the constraints here. Please suggest if we need to look into any specific parameters.
Please refer to this issue: https://github.com/SocketCluster/socketcluster/issues/404
I provided a solution which fixed the issue for me, see #404
Hi, so how do I adapt SocketCluster to my IP-hosted VPS?
If a large number of users attempt to connect at the same time, it would be good if there was a way to delay some handshakes in order to spread out the load evenly over a longer period of time to avoid overwhelming the Node.js event loop.
This could be a plugin (e.g. which delays the next() invocation in MIDDLEWARE_HANDSHAKE) or it could be done internally by SC based on an option passed to the main SocketCluster constructor.
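A rough sketch of the plugin variant (hedged: the middleware constant is MIDDLEWARE_HANDSHAKE in older SC releases and was later split into MIDDLEWARE_HANDSHAKE_WS / MIDDLEWARE_HANDSHAKE_SC, and the delay step is just an illustrative value):

```js
// Stagger handshakes by delaying next() in the handshake middleware.
// Call this from the worker controller, passing in worker.scServer.
function attachHandshakeThrottle(scServer, delayStepMs) {
  delayStepMs = delayStepMs || 10; // spacing between queued handshakes (ms)
  var pending = 0;

  scServer.addMiddleware(scServer.MIDDLEWARE_HANDSHAKE, function (req, next) {
    // Each queued handshake waits a little longer than the previous one,
    // so a burst of simultaneous connections is spread over time instead
    // of hitting the Node.js event loop all at once.
    pending++;
    setTimeout(function () {
      pending--;
      next();
    }, pending * delayStepMs);
  });
}

module.exports = attachHandshakeThrottle;
```

The same idea could live inside SC itself behind a constructor option (e.g. a hypothetical handshakeRateLimit setting), but the middleware form keeps it opt-in.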