cloudflare / pingora

A library for building fast, reliable and evolvable network services.
Apache License 2.0

Why does a CPU running at full capacity achieve less than 2000 RPS when I test throughput on a Linux x86 system? #143

Closed · dosens closed this issue 3 months ago

eaufavor commented 3 months ago

What is your test setup? Benchmark numbers can vary a lot depending on your test setup.

dosens commented 3 months ago

What is your test setup? Benchmark numbers can vary a lot depending on your test setup.

I am not sure if the following information is sufficient.

1. The load_balancer source code is copied from the quick_start.md file (see the sketch after this list).

2. Pingora uses a YAML configuration file as follows:

   version: 1
   threads: 1
   pid_file: /tmp/load_balancer.pid
   error_log: /tmp/load_balancer_err.log
   upgrade_sock: /tmp/load_balancer.sock
   upstream_keepalive_pool_size: 10240

3. The backend is an nginx server that can serve up to 80,000 RPS on its own.
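
For reference, the load balancer from quick_start.md is roughly the sketch below. This is a paraphrase, not the exact code being benchmarked: the upstream address, listening port, and SNI value are placeholders, and the command-line wiring needed to load the YAML file via -c is omitted.

```rust
use async_trait::async_trait;
use pingora::prelude::*;
use std::sync::Arc;

// Round-robin load balancer over a fixed set of upstreams.
pub struct LB(Arc<LoadBalancer<RoundRobin>>);

#[async_trait]
impl ProxyHttp for LB {
    type CTX = ();
    fn new_ctx(&self) -> Self::CTX {}

    async fn upstream_peer(
        &self,
        _session: &mut Session,
        _ctx: &mut Self::CTX,
    ) -> Result<Box<HttpPeer>> {
        // The hash key is irrelevant for round-robin selection.
        let upstream = self.0.select(b"", 256).unwrap();
        // Placeholder peer: plain HTTP (no TLS) to the selected backend.
        let peer = Box::new(HttpPeer::new(upstream, false, "localhost".to_string()));
        Ok(peer)
    }
}

fn main() {
    // Loading the YAML config requires parsing command-line options
    // (the -c flag); that wiring is left out of this sketch.
    let mut my_server = Server::new(None).unwrap();
    my_server.bootstrap();

    // Placeholder upstream standing in for the nginx backend under test.
    let upstreams = LoadBalancer::try_from_iter(["127.0.0.1:8080"]).unwrap();

    let mut lb = http_proxy_service(&my_server.configuration, LB(Arc::new(upstreams)));
    lb.add_tcp("0.0.0.0:6188");

    my_server.add_service(lb);
    my_server.run_forever();
}
```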

styrowolf commented 3 months ago

Are you sure that you're building the program in release mode?
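
(That is, built with `cargo build --release` or run with `cargo run --release`. Unoptimized debug builds of Rust code are often many times slower, which alone could explain throughput this low.)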

dosens commented 3 months ago

Are you sure that you're building the program in release mode?

Oh, I see. I had been compiling in debug mode. Thank you for the reminder!

dosens commented 3 months ago

Are you sure that you're building the program in release mode?

Oh, I see. I had been compiling in debug mode. Thank you for the reminder!

I tested in release mode and found that one CPU reaches about 22,000 RPS. Does this meet expectations?

bestgopher commented 3 months ago

What is your test setup? Benchmark numbers can vary a lot depending on your test setup.

I am not sure if the following information is sufficient.

1. The load_balancer source code is copied from the quick_start.md file.

2. Pingora uses a YAML configuration file as follows:

   version: 1
   threads: 1
   pid_file: /tmp/load_balancer.pid
   error_log: /tmp/load_balancer_err.log
   upgrade_sock: /tmp/load_balancer.sock
   upstream_keepalive_pool_size: 10240

3. The backend is an nginx server that can serve up to 80,000 RPS on its own.

Does threads: 1 mean single-threaded?

eaufavor commented 3 months ago

I tested in release mode and found that one CPU reaches about 22,000 RPS. Does this meet expectations?

The raw RPS depends heavily on your hardware and OS. To compare, try setting up an nginx/envoy proxy and running your benchmark against it on the same machine.

BTW, I saw in https://github.com/envoyproxy/envoy/issues/19103 that envoy reports RPS of the same order of magnitude (hardware spec undisclosed).

dosens commented 3 months ago

What is your test setup? Benchmark numbers can vary a lot depending on your test setup.

I am not sure if the following information is sufficient. 1. The load_balancer source code is copied from the quick_start.md file. 2. Pingora uses a YAML configuration file (version: 1, threads: 1, pid_file: /tmp/load_balancer.pid, error_log: /tmp/load_balancer_err.log, upgrade_sock: /tmp/load_balancer.sock, upstream_keepalive_pool_size: 10240). 3. The backend is an nginx server that can serve up to 80,000 RPS on its own.

Does threads: 1 mean single-threaded?

yes.
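
(For context, as far as I understand the server configuration: threads sets the number of worker threads each service runs, so threads: 1 restricts the proxy service to a single worker thread, while a higher value such as threads: 4 lets the service spread work across more cores.)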

dosens commented 3 months ago

I measured RPS throughput with one thread and with multiple threads, and it does not scale linearly with the thread count. Is this expected?

1 thread - Pingora echo HTTP server - 79,000 RPS
2 threads - Pingora echo HTTP server - 124,000 RPS
3 threads - Pingora echo HTTP server - 170,000 RPS

1 worker - nginx HTTP server - 98,000 RPS
2 workers - nginx HTTP server - 181,000 RPS
3 workers - nginx HTTP server - 221,000 RPS

The Pingora echo HTTP server and the nginx HTTP server run on the same operating system, with the same CPU core pinning strategy.
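
(Working out the scaling from those figures: going from 1 to 3 threads/workers gives roughly 170,000 / 79,000 ≈ 2.15x for Pingora and 221,000 / 98,000 ≈ 2.26x for nginx, so neither server scales perfectly linearly on this setup.)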

eaufavor commented 3 months ago

Is this expected?

These numbers look fine to me. Please be mindful that performance numbers from synthetic benchmarks like these are not direct indicators of real-world performance.

github-actions[bot] commented 3 months ago

This question has been stale for a week. It will be closed in an additional day if not updated.

github-actions[bot] commented 3 months ago

This issue has been closed because it has been stalled with no activity.