djc / bb8

Full-featured async (tokio-based) postgres connection pool (like r2d2)

Performance of bb8 with tokio-postgres worse than r2d2 with postgres #29

Closed · bikeshedder closed this issue 5 years ago

bikeshedder commented 5 years ago

I wanted to go full async with the application I'm currently building and tried both bb8 and l337 with tokio-postgres, only to find that both performed worse than r2d2 with postgres:

https://bitbucket.org/bikeshedder/actix_web_async_postgres/src/master/

Am I doing anything wrong or is this maybe an issue with tokio-postgres?
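
For context, the two paths being compared boil down to something like the sketch below. This is not the benchmark repo's actual code; it is a minimal illustration written against the current bb8-postgres and r2d2-postgres APIs (the APIs at the time of this thread differed), and the query, table, and connection types are placeholders.

```rust
use bb8_postgres::PostgresConnectionManager;
use tokio_postgres::NoTls;

// Async path: checking a connection out of the pool and running the query
// both yield to the tokio executor instead of blocking an OS thread.
async fn event_list_bb8(
    pool: &bb8::Pool<PostgresConnectionManager<NoTls>>,
) -> Result<Vec<tokio_postgres::Row>, Box<dyn std::error::Error>> {
    let conn = pool.get().await?;
    Ok(conn.query("SELECT id, title FROM event", &[]).await?)
}

// Blocking path: the same work, but both the pool checkout and the query
// tie up the calling thread until Postgres answers.
fn event_list_r2d2(
    pool: &r2d2::Pool<r2d2_postgres::PostgresConnectionManager<postgres::NoTls>>,
) -> Result<Vec<postgres::Row>, Box<dyn std::error::Error>> {
    let mut conn = pool.get()?;
    Ok(conn.query("SELECT id, title FROM event", &[])?)
}
```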

khuey commented 5 years ago

One useful statistic to know there would be how many postgres connections are actually used in each case. I suspect the async libraries are using far more connections than the blocking r2d2.
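
One way to get that number from the server itself, rather than from process lists, is to query pg_stat_activity. Below is a hypothetical helper sketched with tokio-postgres; the function name and error handling are illustrative, but the view and the count query are standard PostgreSQL.

```rust
use tokio_postgres::{Client, Error};

// Count the server-side connections currently open to a given database.
async fn connection_count(client: &Client, db: &str) -> Result<i64, Error> {
    let row = client
        .query_one(
            "SELECT count(*) FROM pg_stat_activity WHERE datname = $1",
            &[&db],
        )
        .await?;
    // count(*) comes back as a PostgreSQL bigint, i.e. an i64.
    Ok(row.get(0))
}
```

Sampling that count while each benchmark runs would show whether the three pools really hold the same number of connections.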

bikeshedder commented 5 years ago

As you can see in the code I use the same pool size for all three implementations:

const POOL_MIN_SIZE: u16 = 4;
const POOL_MAX_SIZE: u16 = 16;

They were not extracted into constants before. I just changed that to make it more obvious that the pools are configured exactly the same.
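
A sketch of how those shared constants would feed both builders; the connection string is a placeholder and the function is illustrative, not the repo's code. The current bb8 and r2d2 builders take u32 sizes, hence the casts from the u16 constants.

```rust
use bb8_postgres::PostgresConnectionManager as Bb8Manager;
use r2d2_postgres::PostgresConnectionManager as R2d2Manager;

const POOL_MIN_SIZE: u16 = 4;
const POOL_MAX_SIZE: u16 = 16;

async fn build_pools(
    conn_str: &str,
) -> Result<
    (
        bb8::Pool<Bb8Manager<tokio_postgres::NoTls>>,
        r2d2::Pool<R2d2Manager<postgres::NoTls>>,
    ),
    Box<dyn std::error::Error>,
> {
    // bb8: asynchronous construction, limits taken from the shared constants.
    let manager = Bb8Manager::new_from_stringlike(conn_str, tokio_postgres::NoTls)?;
    let bb8_pool = bb8::Pool::builder()
        .min_idle(Some(u32::from(POOL_MIN_SIZE)))
        .max_size(u32::from(POOL_MAX_SIZE))
        .build(manager)
        .await?;

    // r2d2: blocking construction with exactly the same limits.
    let manager = R2d2Manager::new(conn_str.parse()?, postgres::NoTls);
    let r2d2_pool = r2d2::Pool::builder()
        .min_idle(Some(u32::from(POOL_MIN_SIZE)))
        .max_size(u32::from(POOL_MAX_SIZE))
        .build(manager)?;

    Ok((bb8_pool, r2d2_pool))
}
```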

I also made sure to run htop with a filter and could see 48 (3×16) DB connections to PostgreSQL, which are all evenly utilized depending on the test currently running.

khuey commented 5 years ago

On my machine, with a release build of your test binary, I get

Running 2m test @ http://localhost:8000/v1.0/event_list_l337
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.12ms  567.15us  29.99ms   78.09%
    Req/Sec     7.79k   164.50     8.97k    73.35%
  Latency Distribution
     50%    4.04ms
     75%    4.37ms
     90%    4.77ms
     99%    5.98ms
  3720626 requests in 2.00m, 1.49GB read
Requests/sec:  30994.28
Transfer/sec:     12.71MB
Running 2m test @ http://localhost:8000/v1.0/event_list_r2d2
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.65ms    2.41ms  50.40ms   88.78%
    Req/Sec     7.23k   316.71    27.38k    98.71%
  Latency Distribution
     50%    3.69ms
     75%    4.42ms
     90%    7.49ms
     99%   14.98ms
  3453682 requests in 2.00m, 1.38GB read
Requests/sec:  28756.74
Transfer/sec:     11.79MB
Running 2m test @ http://localhost:8000/v1.0/event_list_bb8
  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.96ms  322.86us  32.02ms   82.08%
    Req/Sec     6.48k    92.65     7.41k    92.40%
  Latency Distribution
     50%    4.92ms
     75%    5.09ms
     90%    5.28ms
     99%    5.81ms
  3096725 requests in 2.00m, 1.24GB read
Requests/sec:  25802.00
Transfer/sec:     10.58MB

which is more along the lines of what I would expect.

bikeshedder commented 5 years ago

I just updated r2d2 to the latest RC, which also uses a newer version of (tokio-)postgres; this resulted in a clearly measurable performance drop for the r2d2 implementation. It might all be related to this: https://github.com/sfackler/rust-postgres/issues/469

bikeshedder commented 5 years ago

It really seems to be an issue with the rust-postgres crate: https://github.com/sfackler/rust-postgres/issues/469#issuecomment-517764742