renhiyama opened 2 years ago
Hello @renhiyama ,
Do you have those errors on every request, or do they just occur from time to time?
I need more info from you:
- the load you used to test (the -c parameter in ab)
- ulimit -n
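As context for the `ulimit -n` question: every TCP connection consumes one file descriptor, so with `-c 1000` a low per-process limit (the default is often 1024) can itself cause accept/connect errors. A quick check:

```shell
# Show the per-process open-file-descriptor limit for the current shell.
# With ab -c 1000 you want this comfortably above 1000.
ulimit -n
```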
Regards, Leonid
I used apache bench: ab -n 100000 -c 1000 http://127.0.0.1:8000
My PC is 11 years old now, running Ubuntu 21.10 (amd64).
I know that this was probably a lot of load to test, but I want to know what that error actually means.
Also thanks for the fast reply :D Lemme know if I can help with any more info
@renhiyama ,
I want to know what that error actually means.
When a new connection comes in, oatpp will spawn a thread to process it.
Then oatpp will try to assign it to one of the CPUs (which is not necessary but sometimes gives a performance boost).
If the assignThreadToCpu call fails, it prints the error message, but execution continues just fine.
I used apache bench:
ab -n 100000 -c 1000 http://127.0.0.1:8000
Well, it's not a big load at all. However (and this is unrelated to the original issue), you might run out of ephemeral ports since you don't use the keep-alive option.
Try running it with the -k option:
ab -k -n 100000 -c 1000 http://127.0.0.1:8000
Also, can you please post the whole output of the test here? I'm interested in how often that error occurs for you.
Ok, will check it out tomorrow! 😊
Hey lgan, I have a doubt: if people come to know that without the keep-alive option the server responds more slowly, a DDoS attack could be more successful without keep-alive 🤔. Also, the error happens every time, starting from around the 4000th request up to the 8000th. I will send the full log tomorrow; I forgot about this issue for the last 3 days because I was trying out Copilot 😅.
Hey @renhiyama ,
I have a doubt: if people come to know that without the keep-alive option the server responds more slowly, a DDoS attack could be more successful without keep-alive 🤔.
It's actually a client issue, not a server one. When you connect to the same server from one machine, you can easily exhaust all ephemeral ports on that machine, because after a connection is closed the ephemeral port still needs some time to be freed.
When running ab without the keep-alive option, you'll exhaust all ephemeral ports and you'll notice that at some point your test starts running very slowly. However, the load on the server will be small, because it's the client that is unable to establish new connections.
I will send the full log tomorrow; I forgot about this issue for the last 3 days because I was trying out Copilot 😅.
No problem, we are all busy :) Yep, please send the full log :)
$ ab -n 100000 -c 1000 -k http://127.0.0.1:3000/
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests
Server Software: oatpp/1.3.0
Server Hostname: 127.0.0.1
Server Port: 3000
Document Path: /
Document Length: 33 bytes
Concurrency Level: 1000
Time taken for tests: 4.882 seconds
Complete requests: 100000
Failed requests: 0
Keep-Alive requests: 100000
Total transferred: 14900000 bytes
HTML transferred: 3300000 bytes
Requests per second: 20483.90 [#/sec] (mean)
Time per request: 48.819 [ms] (mean)
Time per request: 0.049 [ms] (mean, across all concurrent requests)
Transfer rate: 2980.57 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 10.9 0 122
Processing: 0 47 56.5 35 1412
Waiting: 0 47 56.5 35 1412
Total: 0 48 59.0 35 1501
Percentage of the requests served within a certain time (ms)
50% 35
66% 43
75% 54
80% 66
90% 89
95% 112
98% 154
99% 240
100% 1501 (longest request)
Seems like keep-alive stops the errors/warnings: there are no error logs when using -k, but there are when not using -k.
If it's a client-side error, I want to know why the server is logging those errors... Also, can I get the file name and line number where this error is generated? That would be helpful, since I use a different logger service now.
Hold up, wait a minute! There's a plot twist! Just by removing the environment code (the debugging stuff we were told to remove before going to production), the RAM used and the response time, both under load and overall, are now about half of what they were! Cheers!!
$ ab -n 100000 -c 1000 http://127.0.0.1:3000/
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests
Server Software: oatpp/1.3.0
Server Hostname: 127.0.0.1
Server Port: 3000
Document Path: /
Document Length: 33 bytes
Concurrency Level: 1000
Time taken for tests: 17.292 seconds
Complete requests: 100000
Failed requests: 0
Total transferred: 14400000 bytes
HTML transferred: 3300000 bytes
Requests per second: 5782.93 [#/sec] (mean)
Time per request: 172.923 [ms] (mean)
Time per request: 0.173 [ms] (mean, across all concurrent requests)
Transfer rate: 813.22 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 26 38.2 4 231
Processing: 13 147 68.3 137 522
Waiting: 0 136 66.1 135 520
Total: 59 172 67.8 142 538
Percentage of the requests served within a certain time (ms)
50% 142
66% 168
75% 193
80% 210
90% 269
95% 310
98% 378
99% 425
100% 538 (longest request)
$ ab -n 100000 -c 1000 -k http://127.0.0.1:3000/
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests
Server Software: oatpp/1.3.0
Server Hostname: 127.0.0.1
Server Port: 3000
Document Path: /
Document Length: 33 bytes
Concurrency Level: 1000
Time taken for tests: 4.454 seconds
Complete requests: 100000
Failed requests: 0
Keep-Alive requests: 100000
Total transferred: 14900000 bytes
HTML transferred: 3300000 bytes
Requests per second: 22452.78 [#/sec] (mean)
Time per request: 44.538 [ms] (mean)
Time per request: 0.045 [ms] (mean, across all concurrent requests)
Transfer rate: 3267.06 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 14.6 0 194
Processing: 0 42 22.7 37 693
Waiting: 0 42 22.7 37 693
Total: 0 43 28.1 37 693
Percentage of the requests served within a certain time (ms)
50% 37
66% 41
75% 46
80% 53
90% 70
95% 96
98% 121
99% 148
100% 693 (longest request)
With the keep-alive option, the minimum time decreased so much! (Though the longest request time also increased a little.)
Can I know what happened? Thanks in advance.