hapijs / hapi-pino

🌲 Hapi plugin for the Pino logger
MIT License

Can this be implemented as a Good stream ? #11

Closed yonjah closed 7 years ago

yonjah commented 7 years ago

What is the advantage of an independent Hapi plugin over relying on Good?

Having pino as a Good-compatible stream has a few clear advantages (such as getting filtering and any other Good preprocessors working out of the box). There might be some performance implications, but I would assume that for a very basic pipeline they should be negligible.

It would be great if the benchmark code used to compare this to good-console were published, so we could test it against a good-pino version.

mcollina commented 7 years ago

This is what I have been using: https://github.com/pinojs/hapi-pino/blob/master/example-good.js, redirecting its output to /dev/null and launching an HTTP load test with autocannon or wrk.
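The workflow described above might look something like the following shell session (hypothetical sketch; it assumes example-good.js is run directly with node and that autocannon is available via npx):

```shell
# Start the example server with its log output discarded,
# so we measure logging cost without terminal I/O.
node example-good.js > /dev/null &
SERVER_PID=$!
sleep 1   # give the server a moment to bind the port

# Load-test it; flags mirror the autocannon run reported below.
npx autocannon -c 50 -d 30 localhost:3000

kill "$SERVER_PID"
```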

My initial assessment is that the impact of all those filters is far from negligible. Stream pipelines, specifically, introduce serious overhead.

As with anything performance-related, we will be super happy if good (and hapi) becomes faster, and we will be happy to fold hapi-pino into good-pino if there are zero performance slowdowns. My experience dealing with loggers is that good has too many features and too much extensibility to be fast. I might be wrong, so report your numbers.

Across the whole line of pino integrations, our goal is to provide the fastest way to log data. We have seen that many integrations for other logging libraries are not written with performance as a first-class concern.

(I have nothing against streams, I maintain them)

yonjah commented 7 years ago

I've created a very simple implementation of hapi-pino as a Good stream: https://github.com/pinojs/hapi-pino/pull/12. I'm not sure all the event data is properly logged, but it is enough for benchmarking.

Since the results were really close, I ran `autocannon -c 50 -d 30 localhost:3000` twice without restarting the example server and got the following results:

# good-pino
Running 30s test @ http://localhost:3000
50 connections

Stat         Avg       Stdev   Max
Latency (ms) 45.14     10.58   292
Req/Sec      1093.07   92.62   1160
Bytes/Sec    241.94 kB 19.9 kB 262.14 kB

33k requests in 30s, 7.25 MB read

Running 30s test @ http://localhost:3000
50 connections

Stat         Avg       Stdev    Max
Latency (ms) 46.51     7.71     126
Req/Sec      1061.67   75.92    1139
Bytes/Sec    234.84 kB 17.06 kB 253.95 kB

32k requests in 30s, 7.04 MB read

# hapi-pino
Running 30s test @ http://localhost:3000
50 connections

Stat         Avg      Stdev    Max
Latency (ms) 47.96    25.71    648
Req/Sec      1030.2   158.07   1182
Bytes/Sec    227.4 kB 35.21 kB 262.14 kB

31k requests in 30s, 6.83 MB read

Running 30s test @ http://localhost:3000
50 connections

Stat         Avg       Stdev    Max
Latency (ms) 47.83     8.84     134
Req/Sec      1032.8    79.94    1111
Bytes/Sec    228.56 kB 17.39 kB 245.76 kB

31k requests in 30s, 6.85 MB read

# good-console
Running 30s test @ http://localhost:3000
50 connections

Stat         Avg       Stdev    Max
Latency (ms) 45.68     10.91    269
Req/Sec      1080.94   101.75   1164
Bytes/Sec    238.32 kB 22.98 kB 262.14 kB

32k requests in 30s, 7.17 MB read

Running 30s test @ http://localhost:3000
50 connections

Stat         Avg       Stdev    Max
Latency (ms) 49.23     10.88    201
Req/Sec      1003.97   111.71   1101
Bytes/Sec    222.28 kB 24.35 kB 245.76 kB

30k requests in 30s, 6.66 MB read

The good-pino results seem a bit better, but I'm not sure there is any significant improvement, even compared with good-console.

mcollina commented 7 years ago

Closing as we will not be pursuing this direction (see the discussion in #12).