Open LukeStanbery89 opened 9 years ago
+1
+1
Yes, this is an issue. This is purely due to how node-ddp-client works.
If we open too many DDP websocket connections from a single process, the Node process can't keep up.
So the best approach is to use multiple processes to distribute the load. At Kadira, we use Heroku to run very large load tests. In our experience, 100-200 clients per process works well, but that also depends on the data you subscribe to and other factors.
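A minimal sketch of that setup, assuming `my_load_test.js` is the README example configured with `concurrency: 100` (the file name, process count, and log naming here are illustrative, not part of meteor-down itself):

```shell
# Hypothetical sketch: distribute the load across N meteor-down processes
# of ~100 clients each, instead of one large process.
N=10
for i in $(seq 1 "$N"); do
  # Each background process runs the README example; logs go to separate files.
  node my_load_test.js > "load-$i.log" 2>&1 &
done
wait   # block until every load-test process has exited
echo "launched $N processes"
```

The same loop works across machines by running it once per host, which also sidesteps per-machine socket limits.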
@arunoda that totally makes sense, thanks for the clarification.
@arunoda could you clarify what fails when concurrency is too high?
I tried running 10 processes of the my_load_test.js example from the README with concurrency set to 100 (so 1,000 total concurrency), but eventually these meteor-down processes die with exit code 8, printing the same message @LukeStanbery89 took a screenshot of:
/tmp/stress/client/node_modules/meteor-down/lib/mdown.js:47
if(error) throw error;
^
Network error: ws://localhost:3000/websocket: connect EADDRNOTAVAIL
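For what it's worth, EADDRNOTAVAIL on an outbound connect usually means the client machine itself has run out of local resources, typically ephemeral ports or file descriptors, rather than the server rejecting the connection. A quick way to inspect those limits on Linux (commands assume a typical distro):

```shell
# Inspect the client-side resources that cap outbound connections (Linux).
ulimit -n                                    # per-process open file descriptor limit
cat /proc/sys/net/ipv4/ip_local_port_range   # range of ephemeral ports for outbound sockets
```

If the port range is ~28k ports and each client holds one websocket, a few thousand concurrent connections per source IP can plausibly exhaust it, especially with sockets lingering in TIME_WAIT.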
@arunoda When you do a large load test, do you use a different machine for each meteor-down process?
Do you think the OS is running out of client-side sockets? ...ah, indeed: after running ulimit -n 4096, I can sustain a concurrency of 1,000 from one meteor-down process on one machine (again, using the my_load_test.js example). I'm on 64-bit Ubuntu 14.04.3 LTS.
The next client-side limits to hit will probably be the available local ports and FIN timeouts.
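If those do become the bottleneck, the usual Linux knobs are the ephemeral port range and the FIN timeout. A sketch of the tuning (requires root; the values are examples, not recommendations):

```shell
# Widen the ephemeral port range and recycle closing sockets sooner (Linux).
# Requires root; values below are illustrative examples only.
sudo sysctl -w net.ipv4.ip_local_port_range="15000 65000"
sudo sysctl -w net.ipv4.tcp_fin_timeout=15

# Raise the per-process file descriptor limit for the current shell as well:
ulimit -n 4096
```

Note these sysctl changes are not persistent across reboots unless also written to /etc/sysctl.conf.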
This may be because you are creating and destroying a lot of clients in a short time. We normally run with a concurrency of only 10-30 per process, and we use multiple instances like that to scale.
Hello,
I have been using Meteor Down to load test a Meteor application for my company, but I have found that once I exceed roughly 6,000 concurrent users, Meteor Down begins throwing websocket errors. From what I've found by searching, these errors occur because Meteor Down is trying to use ports that are already in use. I'm not sure whether this is a Meteor Down issue, but I was hoping to find out if there is a workaround or fix.
Any help is appreciated.