zhang-wenchao opened 1 year ago
Comparison with a Golang Gin service:
Requests per second: 14790.47 [#/sec] (mean)
$ ab -n 10000 -c 1000 http://0.0.0.0:8080/v1/util/time
This is ApacheBench, Version 2.3 <$Revision: 1903618 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 0.0.0.0 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software:
Server Hostname: 0.0.0.0
Server Port: 8080
Document Path: /v1/util/time
Document Length: 53 bytes
Concurrency Level: 1000
Time taken for tests: 0.676 seconds
Complete requests: 10000
Failed requests: 0
Total transferred: 4660000 bytes
HTML transferred: 530000 bytes
Requests per second: 14790.47 [#/sec] (mean)
Time per request: 67.611 [ms] (mean)
Time per request: 0.068 [ms] (mean, across all concurrent requests)
Transfer rate: 6730.82 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 29 6.5 30 46
Processing: 14 35 9.4 34 65
Waiting: 1 25 8.2 24 58
Total: 33 64 5.9 64 82
Percentage of the requests served within a certain time (ms)
50% 64
66% 65
75% 67
80% 67
90% 72
95% 76
98% 77
99% 78
100% 82 (longest request)
Using the sample API route below:
Requests per second: 908.80 [#/sec] (mean)
import { NextResponse } from 'next/server';
export async function GET() {
return NextResponse.json({ "test":"test" });
}
$ ab -n 10000 -c 1000 http://192.168.100.10:3000/api
This is ApacheBench, Version 2.3 <$Revision: 1903618 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.100.10 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software:
Server Hostname: 192.168.100.10
Server Port: 3000
Document Path: /api
Document Length: 15 bytes
Concurrency Level: 1000
Time taken for tests: 11.003 seconds
Complete requests: 10000
Failed requests: 0
Total transferred: 2170000 bytes
HTML transferred: 150000 bytes
Requests per second: 908.80 [#/sec] (mean)
Time per request: 1100.347 [ms] (mean)
Time per request: 1.100 [ms] (mean, across all concurrent requests)
Transfer rate: 192.59 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 2 6.9 0 32
Processing: 43 1056 189.7 1003 2045
Waiting: 10 1055 189.7 1003 2045
Total: 43 1058 193.1 1004 2059
Percentage of the requests served within a certain time (ms)
50% 1004
66% 1026
75% 1059
80% 1115
90% 1224
95% 1458
98% 1828
99% 1947
100% 2059 (longest request)
I think this is a problem, because this performance is extremely low.
Such a simple API request should respond much faster, but the results show it does not, so something must be wrong here.
I think I know where the problem lies and how to work around it.
I started two instances of the service with the same configuration, differing only in the port number, and put an nginx upstream in front of them (see the config below and the sketch after it).
nginx.conf configuration:
# Proxy used for socket forwarding
upstream socket_proxy {
    # Destination addresses and ports to forward to
    server 192.168.100.10:3000;
    server 192.168.100.10:3001;
}
# The forwarding server: requests to this port are proxied to the
# addresses defined in the socket_proxy upstream
server {
    listen 2200;
    proxy_pass socket_proxy;
}
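For reference, here is a minimal sketch (not from the original report) of the same idea without nginx: forking one Next.js worker per CPU core with Node's cluster module so the workers share a single port. The file name cluster-server.ts, the port, and the restart-on-exit behavior are my assumptions; it presupposes that next build has already been run.

// cluster-server.ts (hypothetical) — run several Next.js worker processes on one port
import cluster from 'node:cluster';
import { cpus } from 'node:os';
import { createServer } from 'node:http';
import next from 'next';

const port = 3000;

if (cluster.isPrimary) {
  // Fork one worker per CPU core; the workers share the listening socket.
  for (let i = 0; i < cpus().length; i++) {
    cluster.fork();
  }
  // Replace any worker that exits so capacity stays constant.
  cluster.on('exit', () => cluster.fork());
} else {
  const app = next({ dev: false });
  const handle = app.getRequestHandler();

  app.prepare().then(() => {
    createServer((req, res) => handle(req, res)).listen(port, () => {
      console.log(`worker ${process.pid} listening on port ${port}`);
    });
  });
}

Either way the underlying idea is the same: run more than one Next.js process and spread the load across them, instead of pointing all traffic at a single next start instance.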
$ ab -n 10000 -c 200 http://192.168.10.1:2200/api
This is ApacheBench, Version 2.3 <$Revision: 1903618 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.10.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software:
Server Hostname: 192.168.10.1
Server Port: 2200
Document Path: /api
Document Length: 15 bytes
Concurrency Level: 200
Time taken for tests: 5.724 seconds
Complete requests: 10000
Failed requests: 0
Total transferred: 2170000 bytes
HTML transferred: 150000 bytes
Requests per second: 1746.90 [#/sec] (mean)
Time per request: 114.488 [ms] (mean)
Time per request: 0.572 [ms] (mean, across all concurrent requests)
Transfer rate: 370.19 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.7 0 7
Processing: 3 113 101.0 121 320
Waiting: 2 112 101.0 120 320
Total: 3 113 101.0 124 320
Percentage of the requests served within a certain time (ms)
50% 124
66% 192
75% 200
80% 211
90% 244
95% 255
98% 285
99% 305
100% 320 (longest request)
Nearly a twofold improvement in throughput.
@zhang-wenchao I'm also facing a similar issue: when I load-test with 30 users, the app starts returning 502 errors.
Verify canary release
Provide environment information
Which area(s) of Next.js are affected? (leave empty if unsure)
Middleware / Edge (API routes, runtime)
Link to the code that reproduces this issue or a replay of the bug
npx create-next-app@latest
To Reproduce
Use the default config.
Describe the Bug
Very slow response times and poor concurrency.
Expected Behavior
Real:
Estimate
Which browser are you using? (if relevant)
No response
How are you deploying your application? (if relevant)
next start