Closed PierreBesson closed 8 years ago
It looks really promising
It should be discussed whether my approach of using logstash-logback-encoder is better than elastic/elasticsearch-metrics-reporter-java. Both collect the same Dropwizard metrics. My solution has the advantage of using the same socket to send logs and metrics. It also already enriches the logs with the app name, instanceId, host, and port.
So for me this looks like a very good solution.
+1
@PierreBesson is your logstash conf only available for microservices? Because it is certainly interesting to have the monitoring also for monoliths.
Kibana looks cool, so +1 from me.
@cbornet I initially developed it for microservices, so it is currently only available for microservice and gateway apps, but I will enable it for all apps if @jdubois agrees.
So let's go full ELK. Also, when JH3 is released, I plan to write an article explaining how to set up your apps for centralized logging and metrics.
+1 for going full ELK. Is it a lot of work to have it generated for the monolith? For me it's less important, but if it's easy, let's do it.
@jdubois it is really easy, actually just a matter of removing some <% if microservice %> conditions.
OK let's do it then!
Sorry, maybe a little too late. I use RELK (R for Redis) on my JHipster project with Docker. This is the configuration:
+---------------------+ +---------------------+
| | | |
| Production app | | Test app |
| | | |
+----------+----------+ +----------+----------+
| |
+----------+----------+ +----------+----------+
| | | |
| Logstash | | Logstash |
| | | |
+----------+----------+ +----------+----------+
| |
| |
Internet
|
+----------+----------+
| |
| Redis |
| |
+----------+----------+
|
+----------+----------+
| |
| Logstash |
| |
+----------+----------+
|
+----------+----------+
| |
| Elastic |
| |
+----------+----------+
|
+----------+----------+
| |
| Kibana |
| |
+---------------------+
The Logstash instances behind the applications only read the files in the log directory and tag the logs so we know where they come from. They do not parse the logs for Elasticsearch; they send the raw data to Redis. Redis is used for queuing, and the Logstash behind it parses the logs into Elasticsearch.
Our configuration has been operational for 2 months and it works great.
If you need the Logstash configuration, the Docker setup, or the Kibana dashboards, tell me!
@moifort thanks a lot for sharing your monitoring conf. It is really interesting for me to see how these kinds of tools are used. The Redis queuing part is interesting, but I am not sure we would need it.
Initially, I was going to do it like you, with a Logstash per app reading the log files, but on the advice of @jdubois, I investigated how Papertrail (a popular log monitoring SaaS product) retrieves logs, and they advise using a Syslog appender for Logback to send the logs through a UDP socket. Then I found that there is a specific logstash-logback-encoder appender that forwards logs as JSON that Logstash can read with no parsing. However, I have not measured whether using this appender impacts performance. At least I am wrapping the appender in a Logback async appender that will start to drop logs if the number of logs in the queue grows too high.
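For reference, a minimal sketch of what this Logback setup could look like, assuming logstash-logback-encoder's UDP socket appender; the host, port, and custom field values are illustrative, not the exact generated config:

```xml
<configuration>
    <!-- UDP appender from logstash-logback-encoder: ships each log event
         as JSON so Logstash needs no parsing (host/port are illustrative) -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashSocketAppender">
        <syslogHost>localhost</syslogHost>
        <port>5000</port>
        <!-- enrich every event, as described above -->
        <customFields>{"app_name":"myapp","app_port":"8080"}</customFields>
    </appender>

    <!-- Wrap it in an async appender so logging never blocks the app;
         events are dropped once the bounded queue fills up -->
    <appender name="ASYNC_LOGSTASH" class="ch.qos.logback.classic.AsyncAppender">
        <queueSize>512</queueSize>
        <appender-ref ref="LOGSTASH"/>
    </appender>

    <root level="INFO">
        <appender-ref ref="ASYNC_LOGSTASH"/>
    </root>
</configuration>
```

Since UDP is connectionless, a send never blocks on a down ELK stack; the async wrapper mainly protects against slow serialization under load.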
I have not yet begun to work on dashboards. What are the most useful dashboards for you? Also, do you have favorite Kibana plugins that were useful to you?
@PierreBesson I am not mature enough on this, and we have very low activity on the application. But the most interesting part comes from the metric logs. We use them to analyze the number of requests and the time spent per method with the @Timed annotation. But the thing is, you will never find a perfect dashboard; you need to create your dashboard according to your needs.
The first image is the application dashboard with the hit logs stacked by level, and two other graphs using the metric logs: the top 15 requests that take the most time, and below, the number of calls per minute. I think it can be a good start for an initial JHipster dashboard.
The second image is the JVM metrics; this one could be added by default too.
To answer your second paragraph, here is my feedback:
Thanks a lot for your answers. I will try to do those dashboards.
Your approach does indeed seem to have many advantages! For now we are going to stay with our current approach, as it is really easy to use, but I will investigate yours.
I think it might not be too complicated to write some config to set up a file appender and a "local Logstash" to read the file and forward the logs. Or maybe we could use Filebeat instead of Logstash, as it might be faster. The Filebeat Docker container could then be quickly launched with a docker-compose file that extends app.dev.yml.
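A sketch of what that docker-compose extension could look like; the image name, tag, and mount paths below are assumptions, not the actual generated file:

```yaml
# Hypothetical app extension: a Filebeat container tailing the app's
# log directory and forwarding to Logstash for parsing
version: '2'
services:
    filebeat:
        # image name and tag are illustrative
        image: prima/filebeat:1.1.2
        volumes:
            - ./logs:/logs:ro
            - ./filebeat.yml:/filebeat.yml:ro
```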
For queuing with Redis, I will try to add it as an option in hipster-labs/generator-jhipster-docker-compose so that you can have your elk.yml with or without Redis as you want. [EDIT] It seems that it would not be so easy, as Redis cannot receive data in the same format as Logstash.
@moifort your "UDP reconnection" problem is strange, considering there is no connection with UDP... This should be tested with the logstash-logback-encoder appender: restart ELK and see if the app still sends its logs.
@cbornet, @moifort: I just tried stopping and restarting the elk-elasticsearch and elk-logstash containers several times and, luckily, the logs kept being collected just fine :relieved:. But looking at logstash-logback-encoder's issue tracker, there have been some problems in the past, especially with the TCP appender.
@PierreBesson it seems the values are imported by Logstash as strings (e.g. "p999": "0.0") where they should be numbers, so that we can graph them. Do you have this issue?
@cbornet I had not thought about this. That might explain why I had trouble plotting things... I guess we should alter the logstash.conf a bit so it converts them to numbers.
Yes. Use mutate + convert
@cbornet would this solution suit you? The alternative would be to use "mutate" and hardcode all the fields that should be converted to numbers.
Since we know the fields in advance, I believe hardcoded fields would be more performant.
Here is my conf:
filter {
  if [logger_name] =~ "metrics" {
    grok {
      match => { "message" => "(?<data>(.*))" }
    }
    kv {
      source => "data"
      field_split => ", "
    }
    mutate {
      convert => { "value" => "float" }
      convert => { "count" => "integer" }
      convert => { "min" => "float" }
      convert => { "max" => "float" }
      convert => { "mean" => "float" }
      convert => { "stddev" => "float" }
      convert => { "median" => "float" }
      convert => { "p75" => "float" }
      convert => { "p95" => "float" }
      convert => { "p98" => "float" }
      convert => { "p99" => "float" }
      convert => { "p999" => "float" }
      convert => { "mean_rate" => "float" }
      convert => { "m1" => "float" }
      convert => { "m5" => "float" }
      convert => { "m15" => "float" }
    }
  }
}
and now I can graph all metrics in Kibana :smile:
I have played a little with the metrics and I have an issue: as fields have the same name (e.g. value), I couldn't draw them on the same graph (e.g. jvm.memory.heap.used's value together with jvm.memory.non-heap.used's value). So I think it would be better to have the metric name instead of value as the JSON key. @moifort, since your charts show different metrics plotted on the same chart, could you share with us the format of your JSON?
Or maybe just prefix the field names with the metric name.
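A sketch of a prefixing approach, using the kv filter's prefix option (the prefix string here is just an example):

```
kv {
  source => "data"
  field_split => ", "
  # every parsed key gets a common prefix,
  # e.g. value -> metrics_value, name -> metrics_name
  prefix => "metrics_"
}
```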
Yes, sure! Just in case: all my fields are prefixed with metrics_. Here is the JSON file for the JVM metrics.
[
{
"_id": "JVM-Mémoire",
"_type": "visualization",
"_source": {
"title": "JVM - Mémoire",
"visState": "{\"type\":\"line\",\"params\":{\"shareYAxis\":true,\"addTooltip\":true,\"addLegend\":true,\"showCircles\":true,\"smoothLines\":true,\"interpolate\":\"linear\",\"scale\":\"linear\",\"drawLinesBetweenPoints\":true,\"radiusRatio\":9,\"times\":[],\"addTimeMarker\":false,\"defaultYExtents\":true,\"setYExtents\":false,\"yAxis\":{},\"spyPerPage\":10},\"aggs\":[{\"id\":\"1\",\"type\":\"avg\",\"schema\":\"metric\",\"params\":{\"field\":\"metrics_value\",\"json\":\"\"}},{\"id\":\"2\",\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}},{\"id\":\"3\",\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"metrics_name.raw\",\"size\":100,\"order\":\"desc\",\"orderBy\":\"1\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"spy\":{\"mode\":{\"name\":null,\"fill\":false}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-*\",\"query\":{\"query_string\":{\"query\":\"metrics_name : jvm.memory.total*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "JVM-Thread",
"_type": "visualization",
"_source": {
"title": "JVM - Thread",
"visState": "{\"type\":\"line\",\"params\":{\"shareYAxis\":true,\"addTooltip\":true,\"addLegend\":true,\"showCircles\":true,\"smoothLines\":true,\"interpolate\":\"linear\",\"scale\":\"linear\",\"drawLinesBetweenPoints\":true,\"radiusRatio\":9,\"times\":[],\"addTimeMarker\":false,\"defaultYExtents\":true,\"setYExtents\":false,\"yAxis\":{},\"spyPerPage\":10},\"aggs\":[{\"id\":\"1\",\"type\":\"avg\",\"schema\":\"metric\",\"params\":{\"field\":\"metrics_value\",\"json\":\"\"}},{\"id\":\"2\",\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"custom\",\"customInterval\":\"1m\",\"min_doc_count\":1,\"extended_bounds\":{}}},{\"id\":\"3\",\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"metrics_name.raw\",\"size\":100,\"order\":\"desc\",\"orderBy\":\"1\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"spy\":{\"mode\":{\"name\":null,\"fill\":false}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-*\",\"query\":{\"query_string\":{\"query\":\"metrics_name : jvm.threads.*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "JVM-Mémoire-non-heap",
"_type": "visualization",
"_source": {
"title": "JVM - Mémoire non-heap",
"visState": "{\"type\":\"line\",\"params\":{\"shareYAxis\":true,\"addTooltip\":true,\"addLegend\":true,\"showCircles\":true,\"smoothLines\":true,\"interpolate\":\"linear\",\"scale\":\"linear\",\"drawLinesBetweenPoints\":true,\"radiusRatio\":9,\"times\":[],\"addTimeMarker\":false,\"defaultYExtents\":true,\"setYExtents\":false,\"yAxis\":{},\"spyPerPage\":10},\"aggs\":[{\"id\":\"1\",\"type\":\"avg\",\"schema\":\"metric\",\"params\":{\"field\":\"metrics_value\",\"json\":\"\"}},{\"id\":\"2\",\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}},{\"id\":\"3\",\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"metrics_name.raw\",\"size\":100,\"order\":\"desc\",\"orderBy\":\"1\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"spy\":{\"mode\":{\"name\":null,\"fill\":false}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-*\",\"query\":{\"query_string\":{\"query\":\"metrics_name : jvm.memory.non-heap.*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
},
{
"_id": "JVM-Mémoire-heap",
"_type": "visualization",
"_source": {
"title": "JVM - Mémoire heap",
"visState": "{\"type\":\"line\",\"params\":{\"shareYAxis\":true,\"addTooltip\":true,\"addLegend\":true,\"showCircles\":true,\"smoothLines\":true,\"interpolate\":\"linear\",\"scale\":\"linear\",\"drawLinesBetweenPoints\":true,\"radiusRatio\":9,\"times\":[],\"addTimeMarker\":false,\"defaultYExtents\":true,\"setYExtents\":false,\"yAxis\":{},\"spyPerPage\":10},\"aggs\":[{\"id\":\"1\",\"type\":\"avg\",\"schema\":\"metric\",\"params\":{\"field\":\"metrics_value\",\"json\":\"\"}},{\"id\":\"2\",\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}},{\"id\":\"3\",\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"metrics_name.raw\",\"size\":100,\"order\":\"desc\",\"orderBy\":\"1\"}}],\"listeners\":{}}",
"uiStateJSON": "{\"spy\":{\"mode\":{\"name\":null,\"fill\":false}}}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"index\":\"logstash-*\",\"query\":{\"query_string\":{\"query\":\"metrics_name : jvm.memory.heap.*\",\"analyze_wildcard\":true}},\"filter\":[]}"
}
}
}
]
[
{
"_id": "Métrics-JVM",
"_type": "dashboard",
"_source": {
"title": "Métrics - JVM",
"hits": 0,
"description": "",
"panelsJSON": "[{\"id\":\"JVM-Mémoire\",\"type\":\"visualization\",\"panelIndex\":1,\"size_x\":12,\"size_y\":3,\"col\":1,\"row\":1},{\"id\":\"JVM-Mémoire-heap\",\"type\":\"visualization\",\"panelIndex\":2,\"size_x\":12,\"size_y\":3,\"col\":1,\"row\":4},{\"id\":\"JVM-Mémoire-non-heap\",\"type\":\"visualization\",\"panelIndex\":3,\"size_x\":12,\"size_y\":3,\"col\":1,\"row\":7},{\"id\":\"JVM-Thread\",\"type\":\"visualization\",\"panelIndex\":4,\"size_x\":12,\"size_y\":3,\"col\":1,\"row\":10}]",
"optionsJSON": "{\"darkTheme\":true}",
"uiStateJSON": "{\"P-1\":{\"spy\":{\"mode\":{\"name\":null,\"fill\":false}}}}",
"version": 1,
"timeRestore": false,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[{\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}}}]}"
}
}
}
]
@cbornet I've tried your conf; I checked and it seems you didn't forget any field, so I added your Logstash config to generator-jhipster-docker-compose. It will be available by default. We will also have logging/metrics automatically enabled for all apps that register themselves with the config server at startup (see this).
@cbornet do you want the logs of the application?
@moifort yes, a metric log in JSON format
I could do it with the current Logstash conf and the Timelion plugin! Expression:
.es(jvm.memory.heap.used, metric='avg:value'), .es(jvm.memory.non-heap.used, metric='avg:value'), .es(jvm.memory.total.used, metric='avg:value'), .es(jvm.memory.total.max, metric='avg:value').color(#ff0000)
An issue is that Timelion doesn't seem to care about the dark theme...
@cbornet
[
{
"_id": "Alantaya-Logs-slash-Temps-[ALL]",
"_type": "visualization",
"_source": {
"title": "Alantaya - Logs/Temps [ALL]",
"visState": "{\"type\":\"histogram\",\"params\":{\"shareYAxis\":true,\"addTooltip\":true,\"addLegend\":true,\"scale\":\"linear\",\"mode\":\"stacked\",\"times\":[],\"addTimeMarker\":false,\"defaultYExtents\":false,\"setYExtents\":false,\"yAxis\":{}},\"aggs\":[{\"id\":\"1\",\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}}],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"savedSearchId": "Alantaya-stacks",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[]}"
}
}
}
]
[
{
"_id": "Alantaya-Max-request-m1,-top-15-time-request-(without-also-and-test)",
"_type": "visualization",
"_source": {
"title": "Alantaya - Max request/second m1, top 15 time request (without also and test)",
"visState": "{\n \"type\": \"line\",\n \"params\": {\n \"shareYAxis\": true,\n \"addTooltip\": true,\n \"addLegend\": true,\n \"showCircles\": true,\n \"smoothLines\": true,\n \"interpolate\": \"linear\",\n \"scale\": \"linear\",\n \"drawLinesBetweenPoints\": true,\n \"radiusRatio\": 9,\n \"times\": [],\n \"addTimeMarker\": false,\n \"defaultYExtents\": true,\n \"setYExtents\": false,\n \"yAxis\": {}\n },\n \"aggs\": [\n {\n \"id\": \"2\",\n \"type\": \"date_histogram\",\n \"schema\": \"segment\",\n \"params\": {\n \"field\": \"@timestamp\",\n \"interval\": \"auto\",\n \"customInterval\": \"2h\",\n \"min_doc_count\": 1,\n \"extended_bounds\": {}\n }\n },\n {\n \"id\": \"4\",\n \"type\": \"max\",\n \"schema\": \"metric\",\n \"params\": {\n \"field\": \"metrics_m1\"\n }\n },\n {\n \"id\": \"5\",\n \"type\": \"terms\",\n \"schema\": \"group\",\n \"params\": {\n \"field\": \"metrics_name.raw\",\n \"size\": 15,\n \"order\": \"desc\",\n \"orderBy\": \"4\"\n }\n }\n ],\n \"listeners\": {}\n}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[]}"
}
}
},
{
"_id": "Alantaya-Metrics-top-15-time-request-(without-algo)",
"_type": "visualization",
"_source": {
"title": "Alantaya - Metrics top 15 time request (without algo)",
"visState": "{\"type\":\"line\",\"params\":{\"shareYAxis\":true,\"addTooltip\":true,\"addLegend\":true,\"showCircles\":true,\"smoothLines\":true,\"interpolate\":\"linear\",\"scale\":\"linear\",\"drawLinesBetweenPoints\":true,\"radiusRatio\":9,\"times\":[],\"addTimeMarker\":false,\"defaultYExtents\":true,\"setYExtents\":false,\"yAxis\":{}},\"aggs\":[{\"id\":\"2\",\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}},{\"id\":\"4\",\"type\":\"avg\",\"schema\":\"metric\",\"params\":{\"field\":\"metrics_mean\"}},{\"id\":\"5\",\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"metrics_name.raw\",\"size\":15,\"order\":\"desc\",\"orderBy\":\"4\"}}],\"listeners\":{}}",
"uiStateJSON": "{}",
"description": "",
"version": 1,
"kibanaSavedObjectMeta": {
"searchSourceJSON": "{\"filter\":[]}"
}
}
}
]
@moifort I was speaking more of the log itself. Currently we have:
{
"_index": "logstash-2016.03.10",
"_type": "GAUGE",
"_id": "AVNgOC9XreRZ2MCT8fpA",
"_score": null,
"_source": {
"@timestamp": "2016-03-10T11:10:07.936Z",
"@version": 1,
"message": "type=GAUGE, name=HikariPool-0.pool.ActiveConnections, value=0",
"logger_name": "com.mycompany.myapp.metrics",
"thread_name": "metrics-logger-reporter-2-thread-1",
"level": "INFO",
"level_value": 20000,
"HOSTNAME": "vagrant-ubuntu-trusty-64",
"app_name": "myapp",
"app_port": "8082",
"type": "GAUGE",
"host": "172.18.0.1",
"data": "type=GAUGE, name=HikariPool-0.pool.ActiveConnections, value=0",
"name": "HikariPool-0.pool.ActiveConnections",
"value": "0"
},
"fields": {
"@timestamp": [
1457608207936
]
},
"sort": [
1457608207936
]
}
But it doesn't seem good for Kibana. I'm trying to have something more like:
{
"HikariPool-0.pool.ActiveConnections": {
"value": 0
}
}
Almost there :smile: I'll keep you posted.
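One possible way to key the value by the metric name in Logstash (a sketch, untested; it assumes the Logstash 2.x ruby filter event API and the field names from the log sample above):

```
filter {
  if [logger_name] =~ "metrics" {
    # Copy the numeric value into a field named after the metric itself,
    # e.g. "HikariPool-0.pool.ActiveConnections" => 0.0, so Kibana can
    # plot several metrics side by side
    ruby {
      code => "event[event['name']] = event['value'].to_f if event['name']"
    }
  }
}
```

A caveat with this approach: every distinct metric name becomes its own Elasticsearch field, which can inflate the index mapping.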
Oh sorry, for metrics I don't use the Logstash appender; we write the metrics to a specific log file and we parse it with Logstash. Here is my conf, in case it can help you:
if [type] == "metrics" {
  grok {
    match => { "message" => "(?<timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}) \[(?<logthread>(?:[a-z]*))\] (?<data>(.*))" }
  }
  date {
    # timezone => "Europe/Paris"
    match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss.SSS" ]
  }
  kv {
    source => "data"
    field_split => ", "
    prefix => "metrics_"
  }
  mutate {
    convert => {
      "metrics_p999" => "float"
      "metrics_count" => "integer"
      "metrics_type" => "string"
      "metrics_name" => "string"
      "metrics_p98" => "float"
      "metrics_min" => "float"
      "metrics_median" => "float"
      "metrics_mean" => "float"
      "metrics_max" => "float"
      "metrics_m1" => "float"
      "metrics_duration_unit" => "string"
      "metrics_p75" => "float"
      "metrics_p99" => "float"
      "metrics_rate_unit" => "string"
      "metrics_stddev" => "float"
      "metrics_p95" => "float"
      "metrics_mean_rate" => "float"
      "metrics_m5" => "float"
      "metrics_m15" => "float"
      "metrics_value" => "float"
    }
  }
}
@moifort unless I am mistaken, your Logstash conf doesn't allow mixing jvm.memory.heap.* with jvm.memory.non-heap.* on the same graph. Am I correct?
Wow guys! Excellent work. I have a surprise for you: a new sub-project, JHipster Monitor, based on Kibana. It loads dashboards automatically: hipster-labs/jhipster-monitor. The Kibana config will also be customizable in the future... I have also pre-installed Timelion, as this plugin seems to be popular. For Elasticsearch and Logstash I use the default images as-is, but I mount a volume to persist the Elasticsearch data.
:clap: I will PR on this then
@cbornet I don't know if I understand you correctly: my Logstash configuration does not filter on value, because jvm.memory.heap and jvm.memory.non-heap are defined in the metrics_name field. I send all metrics to my Elasticsearch; it's in Kibana that I do the filtering. You can import the visualization conf I posted above; I hope this answers your question.
What I mean is that with this Logstash conf, it's not easy to plot, for instance, jvm.memory.non-heap.used, jvm.memory.heap.used, jvm.memory.total.used, and jvm.memory.total.max on the same graph, because the choice of metric is done by a query and not by a field name.
@cbornet thanks. I'm particularly interested in dashboards that can show the data of all microservices at a glance. I don't know if that's what you are working on. For this, I do something with the bucket on the X-axis.
Yes, that's what I tried to do, but it only works for simple things. For instance, I didn't manage to reproduce the graph I made with Timelion in my comment above.
OK, I'm stupid: use for instance name:jvm.memory.heap.used OR name:jvm.memory.total.used OR name:"jvm.memory.non-heap.used" OR name:jvm.memory.total.max as the query, split the X-axis by name.raw, and it draws the graph.
I'm closing this as it's done in this new separate project: https://github.com/jhipster/jhipster-console
Following this thread on the JHipster mailing list, I successfully managed to integrate metrics into the application logs and have them shipped to ELK. So I would like to propose a PR for this if you are OK with it.
I used Dropwizard's Slf4jReporter on the advice of @cbornet and then used the Logstash socket forwarder I recently added to microservice and gateway apps to ship the logs to ELK. Finally, I parse those logs with Logstash to extract the metrics data (logstash config file). Here is what I get in Kibana:
As you can see we have all the fields we need in Kibana to start building dashboards. So I will need some help here. It would be amazing if someone experienced with Kibana could come forward.
Then I was thinking that we could have those dashboards exported as json files and imported on startup of the ELK docker-compose with this script.
The next thing to do would be alerting. I really want to try to use Yelp/elastalert here but I'm open to other solutions.