pantsel / kong-middleman-plugin

A Kong plugin that enables you to make an extra HTTP POST request before calling an API.
MIT License

Do not use LuaSocket #7

Closed thibaultcha closed 6 years ago

thibaultcha commented 6 years ago

Users of this plugin should be aware that it uses LuaSocket instead of ngx_lua's cosockets. LuaSocket calls block the Nginx worker, which will horribly hurt the performance of their Kong nodes. A simple test on a local instance shows that this plugin throttles Kong's performance to about 1 req/s:

wrk -t 4 -c 80 -d 30s http://localhost:8000/here
Running 30s test @ http://localhost:8000/here
  4 threads and 80 connections
Thread Stats   Avg      Stdev     Max   +/- Stdev
  Latency     0.00us    0.00us   0.00us     nan%
  Req/Sec     0.71      1.89     5.00     85.71%
  37 requests in 30.10s, 12.76KB read                                                                                                                                                         
  Socket errors: connect 0, read 0, write 0, timeout 37
Requests/sec:      1.23
Transfer/sec:     433.92B
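For reference, the non-blocking alternative being suggested looks roughly like this. This is only a sketch using lua-resty-http (which ships with Kong and is built on cosockets); the function name and the `conf.url` / `conf.timeout` fields are illustrative, not the plugin's actual code, and it must run inside an OpenResty/Kong request context:

```lua
-- Sketch of a cosocket-based middleman call via lua-resty-http.
-- NOTE: illustrative only; not the plugin's actual implementation.
local http = require "resty.http"

local function call_middleman(conf)
  local client = http.new()
  client:set_timeout(conf.timeout)  -- milliseconds

  -- request_uri runs on ngx_lua cosockets: while waiting on the
  -- upstream, the worker yields and can serve other requests,
  -- unlike a LuaSocket call, which stalls the whole worker.
  local res, err = client:request_uri(conf.url, {
    method = "POST",
    body = ngx.req.get_body_data(),
    headers = { ["Content-Type"] = "application/json" },
  })
  if not res then
    return nil, err
  end
  return res.status
end
```

The key difference is not the HTTP API but the scheduling: cosockets integrate with the Nginx event loop, so upstream latency costs throughput only per in-flight request, not per worker.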
Sverik commented 6 years ago

Interestingly, I cannot reproduce this issue. I have two APIs defined, direct and validated, which call the same service. The difference is that validated has the middleman plugin activated, calling a static resource served by a lighttpd instance. Performance is affected when the plugin is activated, as expected, but it is nowhere near as terrible as 1 req/s. Here are my results (it doesn't really matter whether lighttpd responds with 200 OK or something different).

$ ./wrk -t 4 -c 80 -d 30s http://localhost:8011/direct
Running 30s test @ http://localhost:8011/direct
  4 threads and 80 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    41.44ms    3.29ms  82.13ms   98.22%
    Req/Sec   484.54     38.01   650.00     65.17%
  57976 requests in 30.06s, 21.84MB read
Requests/sec:   1928.64
Transfer/sec:    743.87KB
$ ./wrk -t 4 -c 80 -d 30s http://localhost:8011/validated
Running 30s test @ http://localhost:8011/validated
  4 threads and 80 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    52.33ms   36.80ms 535.28ms   96.99%
    Req/Sec   412.09     72.43   570.00     66.64%
  48749 requests in 30.04s, 18.37MB read
Requests/sec:   1622.71
Transfer/sec:    626.20KB
pantsel commented 6 years ago

@Sverik, nevertheless, @thibaultcha is right. It's not a big change though; I just need to find some time to make it.

Sverik commented 6 years ago

@pantsel Sure thing. I just wanted to calm myself and other potential users down by running some tests to see if it's really that bad, and I thought I'd share the results since I didn't experience the issue. Not denying it's still problematic, for sure.

thibaultcha commented 6 years ago

@Sverik Performance will still depend on the number of workers in use (a single one in my case) and the I/O delay introduced by the middleman upstream (which in your case seems to be relatively low).
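That worker-count dependence is why the two benchmarks diverge so sharply: a blocking call stalls one whole worker for the duration of the upstream request, so with N workers the node degrades to roughly N concurrent requests. Worker count is set in kong.conf; a minimal sketch (the value shown is illustrative):

```
# kong.conf -- Nginx worker processes.
# With a blocking LuaSocket call, each worker can handle only one
# request at a time while waiting on the middleman upstream, so
# total throughput is capped at roughly workers / upstream_latency.
nginx_worker_processes = auto
```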