Closed: marcelinobadin closed this issue 8 years ago
I have been thinking about this for some time. The problem is the following: when deployed to Docker (Swarm, Docker Cloud, Kubernetes, etc.), volumes are a headache to deal with, as they must be shared between all potential hosts (in Docker orchestration systems, we can't know on which host the container will be created).
A workaround is to add the logstash.conf file in the logstash docker image itself.
I have done it here as an experiment: https://github.com/PierreBesson/jhipster-preconfigured-logstash
So the docker-compose setup boils down to this:

```yaml
version: '2'
services:
    elk-elasticsearch:
        image: elasticsearch:2.3.2
        ports:
            - 9200:9200
            - 9300:9300
    elk-logstash:
        image: pbesson/jhipster-preconfigured-logstash
        command: logstash -f /logstash.conf
        ports:
            - 5000:5000/udp
    jhipster-console:
        image: jhipster/jhipster-console:v1.2.1
        ports:
            - 5601:5601
        environment:
            - ELASTICSEARCH_URL=http://elk-elasticsearch:9200
```
No need for complex setup instructions for volumes if you just want to evaluate the console. You would need those only for advanced use cases.
Maybe JHipster should provide its own jhipster/preconfigured-logstash
image, where loading the config from a volume would not be required (it would still be provided as an option). The Dockerfile is only two lines:
```dockerfile
FROM logstash:version
ADD logstash.conf /
```
We already have something similar in the jhipster-console
image for dashboard loading.
One advantage I see is that it would help keep the Logstash and Kibana versions in sync, which is a bit tricky to get right... I fear that in the future we could ship a broken ELK setup in JHipster, so this could be a way to fix things (the console is not published to npm, so we have ways to correct things).
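As an illustrative sketch (the jhipster/jhipster-logstash image and the shared tag here are assumptions, not published artifacts), version syncing would amount to pinning both services to the same tag:

```yaml
# Hypothetical elk.yml fragment: both images would be built from the
# same git repo, so using the same tag on both services keeps the
# Logstash and Kibana versions aligned.
elk-logstash:
    image: jhipster/jhipster-logstash:v1.2.1
jhipster-console:
    image: jhipster/jhipster-console:v1.2.1
```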
So the "upstream" elk.yml would depend on these images.
@jdubois would you agree with this?
Then that could be a jhipster/jhipster-logstash image. I'm not really worried about maintaining our own images; they're easy to build and distribute.
Well, at the moment I am trying to use the -e option with logstash. If it works I'll let you know.
```shell
logstash -e '
input {
  udp { port => 5000 type => syslog codec => json }
}
filter {
  if [logger_name] =~ "metrics" {
    kv { source => "message" field_split => ", " prefix => "metric_" }
    mutate {
      convert => { "metric_value" => "float" }
      convert => { "metric_count" => "integer" }
      convert => { "metric_min" => "float" }
      convert => { "metric_max" => "float" }
      convert => { "metric_mean" => "float" }
      convert => { "metric_stddev" => "float" }
      convert => { "metric_median" => "float" }
      convert => { "metric_p75" => "float" }
      convert => { "metric_p95" => "float" }
      convert => { "metric_p98" => "float" }
      convert => { "metric_p99" => "float" }
      convert => { "metric_p999" => "float" }
      convert => { "metric_mean_rate" => "float" }
      convert => { "metric_m1" => "float" }
      convert => { "metric_m5" => "float" }
      convert => { "metric_m15" => "float" }
    }
  }
  mutate { add_field => { "instance_name" => "%{app_name}-%{host}:%{app_port}" } }
}
output {
  elasticsearch { hosts => ["elk-elasticsearch"] }
  stdout { codec => rubydebug }
}'
```
@marcelinobadin I also thought about the long single-line solution, but it leads to duplicated config, in addition to being particularly ugly. Another advantage of the jhipster/jhipster-logstash image would be that we can be sure it will be tagged correctly, as it would be created by an automated build on Docker Hub from the same git repo as the jhipster-console itself.
@PierreBesson sounds like a better solution. Once you have it, tell me and I will try it, because right now I'm working on the webhooks/elastalert we spoke about before.
Apparently the big startup line didn't work. My dashboards have no data; they can't find the metrics.
Did you try the preconfigured-logstash image above?
Yes, I'm using it. Works fine.
I'm closing this, as the way to achieve it is solved. The next release will include a jhipster-logstash
image, so this won't be a problem.
The issue is: when you run docker-compose locally it is easy, because the logstash.conf file is there on your computer and you map this local directory into your container.
But when you try to run it directly on Docker Cloud through a stack file, what do you do? The file is not there.
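For comparison, the local docker-compose approach being contrasted here typically looks like this (the host directory name is a hypothetical example):

```yaml
# Local docker-compose setup: bind-mount logstash.conf from the host.
# This works on a single machine, but breaks on multi-host platforms
# like Docker Cloud or Swarm, because the host directory does not
# exist on the remote node where the container is scheduled.
elk-logstash:
    image: logstash:2.3.2
    command: logstash -f /conf/logstash.conf
    volumes:
        - ./log-conf/:/conf/   # hypothetical host directory containing logstash.conf
```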