Closed: caihonghaoCYF closed this issue 3 months ago
How did you run pingap?
My test result:
cargo build --release
./target/release/pingap -c=~/github/pingap/conf/pingap.toml
wrk 'http://127.0.0.1:6188/stats' --latency
Running 10s test @ http://127.0.0.1:6188/stats
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 352.33us 515.47us 11.66ms 87.20%
Req/Sec 25.70k 1.82k 31.66k 87.00%
Latency Distribution
50% 161.00us
75% 201.00us
90% 1.12ms
99% 2.09ms
511642 requests in 10.01s, 142.37MB read
Requests/sec: 51116.40
Transfer/sec: 14.22MB
Then remove the access log:
# access_log = "tiny"
wrk 'http://127.0.0.1:6188/stats' --latency
Running 10s test @ http://127.0.0.1:6188/stats
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 92.45us 64.57us 4.25ms 90.65%
Req/Sec 54.60k 1.33k 55.98k 91.09%
Latency Distribution
50% 98.00us
75% 103.00us
90% 109.00us
99% 133.00us
1096366 requests in 10.10s, 305.29MB read
Requests/sec: 108558.52
Transfer/sec: 30.23MB
I followed your steps, but found that the improvement is still not obvious...
Machine: Mac M1, 8 cores
[root@10 pingap]# ./target/release/pingap -c=./conf/pingap.toml
[root@10 pingap]# wrk 'http://127.0.0.1:6188/stats' --latency
Running 10s test @ http://127.0.0.1:6188/stats
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 3.80ms 8.73ms 120.91ms 96.52%
Req/Sec 2.11k 0.91k 9.73k 82.91%
Latency Distribution
50% 2.02ms
75% 3.31ms
90% 5.79ms
99% 51.94ms
41966 requests in 10.09s, 11.40MB read
Requests/sec: 4159.49
Transfer/sec: 1.13MB
Try to remove the access log:
# access_log = "tiny"
Your nginx test result is Requests/sec: 15841.06, while pingap's is Requests/sec: 4159.49. Please check the following differences:
Pingap threads config:
threads = 1
Pingap's access log should be appended to a file: start it as a daemon and set the log path:
./target/release/pingap -c=~/github/pingap/conf/pingap.toml -d --log=/tmp/pingap.log
nginx has 4 worker processes and pingap is configured with 4 threads, but in the wrk stress test nginx's QPS is about 4 times that of pingap. Is this because pingap runs in a single process? How can I improve pingap's performance so that it is closer to nginx?
pingap config:
threads = 4
nginx.conf:
worker_processes 4;
events {
use epoll;
worker_connections 102400;
}
wrk result:
[root@10 nginx]# wrk -c 50 -t 4 -d 30s http://127.0.0.1:6188/stats
Running 30s test @ http://127.0.0.1:6188/stats
4 threads and 50 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 2.71ms 2.60ms 36.04ms 85.68%
Req/Sec 5.04k 1.32k 8.91k 66.44%
602373 requests in 30.10s, 164.19MB read
Requests/sec: 20014.91
Transfer/sec: 5.46MB
[root@10 nginx]# wrk -c 50 -t 4 -d 30s http://127.0.0.1:9080/stats
Running 30s test @ http://127.0.0.1:9080/stats
4 threads and 50 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 1.49ms 5.32ms 112.71ms 96.24%
Req/Sec 22.22k 11.82k 59.62k 64.82%
2643045 requests in 30.21s, 415.90MB read
Requests/sec: 87477.79
Transfer/sec: 13.77MB
Could you provide your full TOML configuration? Pingora may be slower than nginx, but not by that much.
https://github.com/cloudflare/pingora/issues/143#issuecomment-2009001359
error_template = ""
pid_file = "/tmp/pingap.pid"
upgrade_sock = "/tmp/pingap_upgrade.sock"
threads = 4
work_stealing = true
grace_period = "3m"
graceful_shutdown_timeout = "10s"
log_level = "info"
[upstreams.charts]
addrs = ["127.0.0.1:9080"]
algo = "hash:cookie"
health_check = "http://charts/ping?connection_timeout=3s&pingap"
connection_timeout = "10s"
total_connection_timeout = "30s"
read_timeout = "10s"
write_timeout = "10s"
idle_timeout = "120s"
[upstreams.diving]
addrs = ["127.0.0.1:5001"]
[locations.lo]
upstream = "charts"
host = ""
path = "/"
proxy_headers = ["name:value"]
headers = ["name:value"]
rewrite = ""
proxy_plugins = ["pingap:requestId", "pingap:stats"]
[servers.test]
addr = "0.0.0.0:6188"
locations = ["lo"]
[proxy_plugins.stats]
category = 0
value = "/stats"
My test result, CPU(M2):
wrk -c 50 -t 4 -d 30s http://127.0.0.1:6188/stats
Running 30s test @ http://127.0.0.1:6188/stats
4 threads and 50 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 187.55us 82.12us 5.32ms 76.15%
Req/Sec 58.11k 3.24k 61.26k 90.53%
6961745 requests in 30.10s, 1.99GB read
Requests/sec: 231283.35
Transfer/sec: 67.68MB
Change threads to 1:
wrk -c 50 -t 4 -d 30s http://127.0.0.1:6188/stats
Running 30s test @ http://127.0.0.1:6188/stats
4 threads and 50 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 431.28us 86.60us 4.51ms 95.15%
Req/Sec 27.92k 679.86 30.67k 91.61%
3343931 requests in 30.10s, 0.96GB read
Requests/sec: 111091.68
Transfer/sec: 32.49MB
Could you share your nginx config? I will run a test against nginx myself.
nginx config
worker_processes 4;
events {
use epoll;
worker_connections 102400;
}
http {
include mime.types;
default_type application/octet-stream;
log_format access 'welb - $remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" "request_time:{$request_time}" '
'"$http_user_agent" $upstream_response_time';
sendfile on;
keepalive_timeout 65;
keepalive_requests 1000000;
server {
listen 7080;
server_name localhost;
access_log /usr/local/openresty/nginx/logs/access.log access;
location / {
default_type text/json;
proxy_pass http://127.0.0.1:9080;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
server {
listen 9080;
server_name localhost;
access_log /usr/local/openresty/nginx/logs/access.log access;
location / {
default_type text/json;
return 200 "9080 success";
}
}
}
I will show my test result later. By the way, pingap's stats endpoint does not return a fixed response for each request: it reads the process memory usage, gets the accepted and processing request counts, and converts the struct to JSON.
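As a rough sketch of what that implies (this is not pingap's actual code; the struct and field names here are hypothetical), each request to the stats endpoint gathers process metrics into a struct and serializes it to JSON:

```rust
// Hypothetical sketch of a /stats-style handler: collect process
// metrics into a struct, then serialize to JSON on every request.
// Field names and values are illustrative, not pingap's real ones.
struct Stats {
    memory_mb: u64, // process memory usage
    accepted: u64,  // total accepted requests
    processing: u64, // requests currently in flight
}

impl Stats {
    fn to_json(&self) -> String {
        // Manual serialization keeps the sketch dependency-free;
        // a real implementation would use something like serde_json.
        format!(
            "{{\"memory_mb\":{},\"accepted\":{},\"processing\":{}}}",
            self.memory_mb, self.accepted, self.processing
        )
    }
}

fn main() {
    let stats = Stats { memory_mb: 32, accepted: 100, processing: 2 };
    println!("{}", stats.to_json());
    // → {"memory_mb":32,"accepted":100,"processing":2}
}
```

So each /stats request does strictly more work than a static nginx `return 200` location, which makes the two benchmarks not directly comparable.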
My nginx test result:
wrk -c 50 -t 4 -d 30s http://127.0.0.1:9080/stats
Running 30s test @ http://127.0.0.1:9080/stats
4 threads and 50 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 687.80us 2.64ms 36.23ms 96.00%
Req/Sec 56.50k 11.99k 82.21k 67.75%
6747608 requests in 30.01s, 1.00GB read
Requests/sec: 224826.83
Transfer/sec: 34.09MB
Because I ran the test on macOS, I changed use epoll; to use kqueue;.
Were your pingap and nginx tests run on the same machine? Please show me your CPU.
I am using a CentOS 8 Stream machine:
[root@10 nginx]# uname -a
Linux 10.0.3.15 4.18.0-552.el8.x86_64 #1 SMP Sun Apr 7 19:39:51 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
[root@10 nginx]# cat /proc/cpuinfo | grep "cores"|uniq
cpu cores : 4
A simple ping -> pong response.
Without access log:
wrk -c 50 -t 4 -d 30s http://127.0.0.1:6188/ping
Running 30s test @ http://127.0.0.1:6188/ping
4 threads and 50 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 185.94us 127.34us 10.46ms 91.66%
Req/Sec 60.49k 5.67k 65.24k 85.88%
7246199 requests in 30.10s, 0.92GB read
Requests/sec: 240732.73
Transfer/sec: 31.45MB
With access log:
wrk -c 50 -t 4 -d 30s http://127.0.0.1:6188/ping
Running 30s test @ http://127.0.0.1:6188/ping
4 threads and 50 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 389.27us 0.93ms 25.32ms 98.98%
Req/Sec 37.88k 3.02k 40.98k 83.47%
4537826 requests in 30.10s, 592.88MB read
Requests/sec: 150747.70
Transfer/sec: 19.70MB
Maybe the file writes are slow.
Pingora appends the log without a buffer: https://github.com/cloudflare/pingora/blob/main/pingora-core/src/server/daemon.rs#L65.
I used a buffered writer for the appended log, like this:
use std::fs::OpenOptions;
use std::io::BufWriter;

let file = OpenOptions::new()
    .append(true)
    .create(true)
    // open with read() in case there are no readers
    // available, otherwise we will panic with
    // an ENXIO since O_NONBLOCK is set
    .read(true)
    .open("/tmp/pingap.log")
    .unwrap();
builder.target(env_logger::Target::Pipe(Box::new(BufWriter::new(file))));
The test result:
wrk -c 50 -t 4 -d 30s http://127.0.0.1:6188/ping
Running 30s test @ http://127.0.0.1:6188/ping
4 threads and 50 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 299.18us 387.52us 18.34ms 99.21%
Req/Sec 42.73k 2.94k 45.76k 87.54%
5118981 requests in 30.10s, 668.81MB read
Requests/sec: 170047.24
Transfer/sec: 22.22MB
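The gap between unbuffered and buffered appends can be reproduced in isolation. Below is a minimal sketch (file paths and the log line content are arbitrary, not pingap's): each unbuffered write_all issues its own write(2) syscall, while BufWriter accumulates lines in a userspace buffer and writes to the file in much larger chunks.

```rust
use std::fs::OpenOptions;
use std::io::{BufWriter, Write};
use std::time::Instant;

fn main() -> std::io::Result<()> {
    let line = b"pingap access log line\n"; // 23 bytes per entry

    // Unbuffered append: one write(2) syscall per log line.
    let mut raw = OpenOptions::new()
        .append(true)
        .create(true)
        .open("/tmp/unbuffered.log")?;
    let start = Instant::now();
    for _ in 0..10_000 {
        raw.write_all(line)?;
    }
    let unbuffered = start.elapsed();

    // Buffered append: lines accumulate in an in-memory buffer
    // (8 KiB by default) and hit the file in large chunks.
    let file = OpenOptions::new()
        .append(true)
        .create(true)
        .open("/tmp/buffered.log")?;
    let mut writer = BufWriter::new(file);
    let start = Instant::now();
    for _ in 0..10_000 {
        writer.write_all(line)?;
    }
    writer.flush()?; // push any remaining buffered bytes to disk
    let buffered = start.elapsed();

    println!("unbuffered: {:?}, buffered: {:?}", unbuffered, buffered);
    Ok(())
}
```

The trade-off is that log lines sitting in the buffer can be lost on a crash, so a real proxy would pair this with periodic flushing.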
I ran cargo build --release but got this error:
error: #[derive(RustEmbed)] folder '/Users/zxbilly/Desktop/pingap/dist/' does not exist. cwd: '/Users/zxbilly/Desktop/pingap'
--> src/plugin/admin.rs:49:1
|
49 | / #[folder = "dist/"]
50 | | struct AdminAsset;
| |__________________^
error[E0599]: no function or associated item named `get` found for struct `AdminAsset` in the current scope
--> src/plugin/admin.rs:411:44
|
50 | struct AdminAsset;
| ----------------- function or associated item `get` not found for this struct
...
411 | EmbeddedStaticFile(AdminAsset::get(file), Duration::from_secs(365 * 24 * 3600)).into()
| ^^^ function or associated item not found in `AdminAsset`
|
= help: items from traits can only be used if the trait is implemented and in scope
= note: the following traits define an item `get`, perhaps you need to implement one of them:
candidate #1: `SliceIndex`
candidate #2: `rustls::server::server_conn::StoresServerSessions`
candidate #3: `prometheus::atomic64::Atomic`
candidate #4: `protobuf::reflect::repeated::ReflectRepeated`
candidate #5: `protobuf::reflect::repeated::ReflectRepeatedEnum`
candidate #6: `protobuf::reflect::repeated::ReflectRepeatedMessage`
candidate #7: `nix::sys::socket::GetSockOpt`
candidate #8: `rustracing::carrier::TextMap`
candidate #9: `toml_edit::table::TableLike`
candidate #10: `tonic::metadata::map::as_metadata_key::Sealed`
candidate #11: `Embed`
For more information about this error, try `rustc --explain E0599`.
error: could not compile `pingap` (lib) due to 2 previous errors
Run make build-web to generate the admin assets.
This question has been stale for a week. It will be closed in an additional day if not updated.
@caihonghaoCYF I also did my benchmarks. In the same environment, the latest nginx handled 450k req in 60 seconds and pingap 330k. The difference is quite significant.
@vicanso When can we expect tinyufo to be implemented in pingap?
The base implementation mainly depends on pingora, and TinyUFO may be used later.
https://github.com/cloudflare/pingora/issues/212#issuecomment-2067694782
This question has been stale for a week. It will be closed in an additional day if not updated.
This issue has been closed because it has been stalled with no activity.
Hello, I used wrk to stress test pingap and found that the TPS was very low, while nginx's TPS was normal.
-------------------------------------------------pingap---------------------------------------------------------------
[root@10 wrk]# ./wrk 'http://127.0.0.1:6188/stats' --latency
Running 10s test @ http://127.0.0.1:6188/stats
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 15.31ms 5.37ms 45.98ms 72.94%
Req/Sec 329.17 81.96 555.00 72.00%
Latency Distribution
50% 15.85ms
75% 18.38ms
90% 21.08ms
99% 32.13ms
6589 requests in 10.07s, 1.04MB read
Requests/sec: 654.24
Transfer/sec: 105.42KB
-------------------------------------------------nginx---------------------------------------------------------------
[root@10 wrk]# ./wrk 'http://127.0.0.1:9080/stats' --latency
Running 10s test @ http://127.0.0.1:9080/stats
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 733.10us 0.89ms 17.91ms 91.71%
Req/Sec 7.97k 2.30k 16.56k 78.50%
Latency Distribution
50% 528.00us
75% 840.00us
90% 1.44ms
99% 4.54ms
158591 requests in 10.01s, 24.95MB read
Requests/sec: 15841.06
Transfer/sec: 2.49MB