coopr closed this issue 5 years ago
Monitoring Kong, to me, means tracking availability and the difference between total latency and upstream latency. I only want to be notified if the Kong-side times increase, not the upstream times.
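Kong reports its own overhead separately from the upstream's in the X-Kong-Proxy-Latency and X-Kong-Upstream-Latency response headers, so an alert can key on the Kong-side number alone. A minimal sketch in Python (the header names are Kong's; the 50 ms threshold is an arbitrary assumption):

```python
def kong_overhead_ms(headers):
    """Milliseconds Kong itself spent on a request.

    Kong reports its own processing time in X-Kong-Proxy-Latency,
    separate from the upstream time in X-Kong-Upstream-Latency.
    """
    return int(headers.get("X-Kong-Proxy-Latency", 0))

def should_alert(headers, threshold_ms=50):
    # Alert only on Kong-side latency, ignoring upstream slowness.
    return kong_overhead_ms(headers) > threshold_ms

headers = {"X-Kong-Proxy-Latency": "120", "X-Kong-Upstream-Latency": "900"}
print(should_alert(headers))  # True: Kong spent 120 ms, over the 50 ms threshold
```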
I generally use the statsd plugin feeding the Prometheus statsd exporter.
Logging Kong means a custom NGINX config with a custom JSON log format going into Logstash.
I recommend using JSON as the default logging format.
The challenge was getting KPI stats per endpoint, which ended up meaning scraping the logs in Logstash and sending them to statsd (e.g. /v1/shoppingcart times scraped from the logs and sent as a timer to the statsd exporter).
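That scrape-and-forward step can be sketched as: parse a JSON access-log line, derive a metric key from the request path, and emit a statsd timer line. The log field names below are illustrative assumptions, not Kong's actual log schema; the `<name>:<value>|ms` wire format is standard statsd:

```python
import json

def log_line_to_statsd(line):
    """Turn a JSON access-log line into a statsd timer metric.

    Assumes the log carries 'request_path' and 'latency_ms' fields
    (illustrative names, not a fixed Kong schema).
    """
    entry = json.loads(line)
    # /v1/shoppingcart -> v1.shoppingcart, a statsd-friendly key
    key = entry["request_path"].strip("/").replace("/", ".")
    # statsd timer wire format: <name>:<value>|ms
    return f"kong.{key}:{entry['latency_ms']}|ms"

line = '{"request_path": "/v1/shoppingcart", "latency_ms": 42}'
print(log_line_to_statsd(line))  # kong.v1.shoppingcart:42|ms
```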
Requests:
1) Make the Kong latency available in statsd.
2) Make stats available for things like Redis/DB fetch times or time spent in any plugin.
3) Use timers instead of gauges where appropriate.
4) Get my act together and write a Riemann plugin.
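On the third request, the distinction matters because a gauge only keeps the last value written, while a timer accumulates every sample in the flush window, so percentiles stay meaningful. A simplified model of the two behaviors (not the actual statsd implementation):

```python
import math

class Gauge:
    # A gauge overwrites: only the last sample survives the flush.
    def __init__(self):
        self.value = None
    def set(self, v):
        self.value = v

class Timer:
    # A timer keeps every sample, so quantiles can be computed at flush.
    def __init__(self):
        self.samples = []
    def record(self, v):
        self.samples.append(v)
    def p95(self):
        ordered = sorted(self.samples)
        # Nearest-rank 95th percentile
        return ordered[math.ceil(0.95 * len(ordered)) - 1]

g, t = Gauge(), Timer()
for ms in [10, 12, 11, 250, 9]:   # one slow outlier
    g.set(ms)
    t.record(ms)

print(g.value)   # 9   -- the outlier is invisible
print(t.p95())   # 250 -- the timer still sees it
```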
All that being said, Kong is great and quite a life-saver.
What does "monitoring Kong" mean to you? Does "API analytics" mean something different to you?
Understanding the inbound requests to my different APIs. This means knowing which APIs are being requested, what the status codes are, what the response times are, and ideally also who the consumer is.
How do you currently monitor Kong? How do you gather API analytics, if that is something different for you?
I use a Prometheus Lua plugin to instrument every request to my APIs. It looks a bit like this:
init_by_lua_block {
    require 'resty.core'
    kong = require 'kong'
    kong.init()

    prometheus = require("prometheus").init("prometheus_metrics")
    metric_requests = prometheus:counter(
        "kong_http_requests_total", "Number of HTTP requests",
        {"upstream", "status", "username"})
    metric_latency = prometheus:histogram(
        "kong_http_request_duration_seconds", "HTTP request latency",
        {"upstream"})
}
<====obfuscated====>
log_by_lua_block {
    -- Kong sets X-Consumer-Username for authenticated consumers;
    -- fall back to "anonymous" otherwise
    local consumer = ngx.req.get_headers()["X-Consumer-Username"]
    if not consumer then
        consumer = "anonymous"
    end
    metric_requests:inc(1, {ngx.var.proxyHost, ngx.var.status, consumer})
    metric_latency:observe(ngx.now() - ngx.req.start_time(), {ngx.var.proxyHost})
    kong.log()
}
It gives me histograms for all of my API endpoints, including the upstream information, the status code, and the username (consumer) that accessed the endpoint.
My Prometheus cluster then scrapes a separate server block, exposed in the config as the metrics endpoint, every 10 seconds, and exposes the metrics on a dashboard.
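What the scraper pulls from that endpoint is the Prometheus text exposition format: one line per labeled sample. A sketch of splitting such a line into its parts (the label values are made up; real exposition lines can also carry timestamps and escape sequences, which this ignores):

```python
import re

def parse_sample(line):
    """Split a Prometheus exposition line into (name, labels, value).

    Handles the simple case: name{label="value",...} <number>
    """
    m = re.match(r'(\w+)\{(.*)\}\s+(\S+)$', line)
    name, raw_labels, value = m.groups()
    labels = dict(re.findall(r'(\w+)="([^"]*)"', raw_labels))
    return name, labels, float(value)

line = 'kong_http_requests_total{upstream="cart",status="200",username="alice"} 42'
name, labels, value = parse_sample(line)
print(name, labels["status"], value)  # kong_http_requests_total 200 42.0
```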
Have you experienced any particular challenges with monitoring Kong? How did you overcome them?
Having to use a third-party Lua module means I need a totally custom NGINX config template, which kind of sucks.
What are your top three requests for improvements in Kong monitoring and API analytics, and why?
Support for Prometheus would honestly satisfy all of my needs. Second to that, a nice administrative dashboard for analytics would also be nice.
What are your favorite Kong monitoring, analytics, and logging plugins? Which plugins have you tried and found did not meet your needs - and why?
Again, I use a third-party Prometheus plugin that isn't specific to Kong in any way; I've just adapted it to give me Kong-related metrics. I tried the official statsd plugin first but was not satisfied with the metrics presented. I much prefer histogram data so I can calculate quantiles on things like response times.
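The reason histograms win here: cumulative bucket counts let you estimate any quantile after the fact, which pre-aggregated statsd numbers don't. A sketch of the linear interpolation that Prometheus's histogram_quantile() performs, simplified to plain (upper_bound, cumulative_count) pairs with made-up numbers:

```python
def histogram_quantile(q, buckets):
    """Estimate quantile q from cumulative (upper_bound, count) buckets.

    Simplified version of Prometheus's histogram_quantile():
    find the bucket containing the target rank, then interpolate
    linearly within it.
    """
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0
    for bound, count in buckets:
        if count >= rank:
            frac = (rank - prev_count) / (count - prev_count)
            return prev_bound + (bound - prev_bound) * frac
        prev_bound, prev_count = bound, count
    return buckets[-1][0]

# Cumulative counts: 50 requests took <= 0.1s, 90 <= 0.5s, 100 <= 1.0s
buckets = [(0.1, 50), (0.5, 90), (1.0, 100)]
print(histogram_quantile(0.95, buckets))  # 0.75 -- p95 interpolated in the last bucket
```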
Kong is built on NGINX - how do you think about monitoring one vs. the other? How much should Kong monitoring incorporate monitoring of NGINX?
I think NGINX already provides most of the metrics and information I care about (request totals, upstream information including response time, consumer information via headers, etc.).
Thank you all for the feedback. Closing this now.
I'd like to hear from you, the Kong community, about how you think about monitoring Kong, and about the topic of API analytics.