Kong / kong

🦍 The Cloud-Native API Gateway and AI Gateway.
https://konghq.com/install/#kong-community
Apache License 2.0
39.28k stars 4.82k forks

[community feedback requested] Monitoring Kong, and API analytics #2543

Closed: coopr closed this issue 5 years ago

coopr commented 7 years ago

I'd like to hear from you, the Kong community, about how you think about monitoring Kong, and about the topic of API analytics.

maguec commented 7 years ago

1. Make the Kong latency available in statsd
2. Make stats available for things like Redis/DB fetch times, or time spent in any plugin
3. Use timers instead of gauges where appropriate
4. Get my act together and write a Riemann plugin

All that being said, Kong is great and quite a life-saver.
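For context on point 3: in the statsd line protocol, the type suffix after the `|` determines how the server aggregates a metric. A timer (`ms`) lets statsd compute means and percentiles over the flush interval, while a gauge (`g`) only keeps the last written value. A sketch (the metric names are illustrative, not from Kong):

```
kong.upstream.latency:123|ms
kong.connections.active:42|g
```

Emitting latency as a gauge would silently discard all but the most recent sample per flush, which is why timers are the right fit for request timings.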

harryparmar commented 7 years ago
thenayr commented 7 years ago

What does "monitoring Kong" mean to you? Does "API analytics" mean something different to you?

Understanding the inbound requests to my different APIs. This means knowing which APIs are being requested, what the status codes are, what the response times are, and ideally also who the consumer is.

How do you currently monitor Kong? How do you gather API analytics, if that is something different for you?

I use a Prometheus Lua plugin to instrument every request to my APIs. It looks a bit like this:

  init_by_lua_block {
      require 'resty.core'
      kong = require 'kong'
      kong.init()
      -- the Prometheus Lua module keeps its metrics in the
      -- "prometheus_metrics" shared dict declared in the nginx config
      prometheus = require("prometheus").init("prometheus_metrics")
      metric_requests = prometheus:counter(
        "kong_http_requests_total", "Number of HTTP requests", {"upstream", "status", "username"})
      metric_latency = prometheus:histogram(
        "kong_http_request_duration_seconds", "HTTP request latency", {"upstream"})
  }

<====obfuscated====>

          log_by_lua_block {
              -- Kong sets X-Consumer-Username on authenticated requests
              local consumer = ngx.req.get_headers()["X-Consumer-Username"]
              if not consumer then
                consumer = "anonymous"
              end
              metric_requests:inc(1, {ngx.var.proxyHost, ngx.var.status, consumer})
              metric_latency:observe(ngx.now() - ngx.req.start_time(), {ngx.var.proxyHost})
              kong.log()
          }

It gives me latency histograms for all my API endpoints, including the upstream information, the status code, and the username (consumer) that accessed the endpoint.

My Prometheus cluster then scrapes a separate server block exposed in the config as the metrics endpoint, pulling updated metrics every 10 seconds to display on a dashboard.
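That separate server block can be sketched roughly as follows, assuming the same third-party Prometheus Lua module as above; the port number and shared-dict size are illustrative, not from this thread:

```
    # shared dict the Lua module writes metrics into (http-level directive)
    lua_shared_dict prometheus_metrics 10M;

    server {
        # hypothetical internal port for the Prometheus scraper
        listen 9145;
        location /metrics {
            content_by_lua_block {
                -- serialize all registered counters/histograms
                -- in the Prometheus text exposition format
                prometheus:collect()
            }
        }
    }
```

Keeping the metrics endpoint on its own listener makes it easy to firewall it off from proxy traffic while still letting the scraper reach it.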

Have you experienced any particular challenges with monitoring Kong? How did you overcome them?

Having to use a third-party Lua module means I need a totally custom Nginx config template, which kind of sucks.

What are your top three requests for improvements in Kong monitoring and API analytics, and why?

Support for Prometheus would honestly satisfy all of my needs. Second to that, an administrative dashboard for analytics would also be welcome.

What are your favorite Kong monitoring, analytics, and logging plugins? Which plugins have you tried and found did not meet your needs - and why?

Again, I use a third-party Prometheus plugin that isn't specific to Kong in any way; I've just adapted it to give me Kong-related metrics. I tried the official statsd plugin first, but was not satisfied with the metrics it presented. I much prefer histogram data, which lets me calculate quantiles on things like response times.
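To illustrate the histogram preference: with bucketed histogram data, a quantile such as p95 latency can be computed at query time in Prometheus, per upstream, from the metric defined above (a sketch of the standard `histogram_quantile` pattern):

```
histogram_quantile(0.95,
  sum by (le, upstream) (rate(kong_http_request_duration_seconds_bucket[5m])))
```

A plain statsd counter or gauge pipeline cannot recover quantiles after the fact, whereas the histogram buckets allow any quantile over any time window.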

Kong is built on NGINX - how do you think about monitoring one vs. the other? How much should Kong monitoring incorporate monitoring of NGINX?

I think Nginx provides most of the metrics and information I care about (request totals, upstream information including response times, consumer information via headers, etc.).

bungle commented 5 years ago

Thank you all for the feedback. Closing this now.