Open uschtwill opened 7 years ago
I got that working, but security is pretty much out the door. I'll see if I can make a pull request.
On 13 December 2016 at 20:08:09, Wilhelm Uschtrin (notifications@github.com) wrote:
@riemers https://github.com/riemers was working on this in #5 https://github.com/uschtwill/docker_monitoring_logging_alerting/issues/5. Not sure if this is ready yet?!
Awesome, let's have a look!
Yeah... I did some research on security just now. There are different ways, I guess.
Apparently the guys over at Prometheus decided that node_exporter itself shouldn't provide any security measures, so no TLS/SSL and no basic auth. They're focusing on what Prometheus is all about, monitoring, and advise people to handle the situation by using a reverse proxy. Check here and here. cAdvisor seems to take the UNIX approach as well.
Logstash and Filebeat, on the other hand, can be secured with SSL/TLS 'natively': https://www.elastic.co/guide/en/beats/filebeat/current/configuring-ssl-logstash.html
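For reference, the Filebeat side of that guide boils down to an output section in filebeat.yml roughly like the following. This is a sketch: the hostname and certificate paths are placeholders, not something that exists in this repo.

```yaml
# Sketch of a TLS-secured Filebeat -> Logstash output in filebeat.yml.
# Hostname and certificate paths below are placeholders.
output.logstash:
  hosts: ["logstash.example.com:5044"]
  # CA used to verify the Logstash server certificate:
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-ca.crt"]
  # Client certificate and key, if Logstash requires client auth:
  ssl.certificate: "/etc/pki/tls/certs/filebeat.crt"
  ssl.key: "/etc/pki/tls/private/filebeat.key"
```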
So plan A would be to just reverse proxy and SSL/TLS everything by hand.
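As a rough sketch of what plan A could look like in this compose setup, a TLS-terminating nginx sidecar could sit in front of the exporters. The nginx.conf, certs directory and published port here are assumptions for illustration, not files from this repo; the nginx.conf would proxy_pass to node_exporter/cAdvisor and add basic auth.

```yaml
# Hypothetical TLS-terminating reverse proxy in front of the exporters.
# Only this service would publish a port; the exporters would stay internal.
nginxproxy:
  container_name: nginxproxy
  image: nginx:stable
  ports:
    - 443:443
  volumes:
    - ./nginx.conf:/etc/nginx/nginx.conf:ro
    - ./certs:/etc/nginx/certs:ro
  restart: always
```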
Plan B.1 really would be to just set up a Docker Swarm, which gives you (among other benefits) an Overlay network which takes care of all "over the internet" routing, encryption and service discovery. With Docker Swarm we should be able to pretty much deploy your docker-compose.yml from #5 as is, and it should magically work and be secure. No need for any port forwarding either.
Plan B.2 would be to go for something more all-encompassing like Mesos or Kubernetes right away, while you're at it.
Plan C would be to roll your own networking with one of these or (Open)VPN.
Plan D would be to go for SSH tunnels. Why not? ;)
I think Plan B.1 sounds good?!
The swarm setup could be done in one line in the setup.sh and we could just wrap the docker-compose.yml for other hosts in a small script with another couple of lines.
What do you think?
version: '2'
services:
  #########################################################
  ####                     LOGGING                     ####
  #########################################################
  # Runs on your node(s) and forwards all logs to Logstash.
  filebeat:
    container_name: filebeat
    image: prima/filebeat
    volumes:
      - ./filebeat.yml:/filebeat.yml
    restart: always
    labels:
      container_group: monitoring
    logging:
      driver: gelf
      options:
        gelf-address: udp://<your ip>:12201
        labels: container_group

  #########################################################
  ####                    MONITORING                   ####
  #########################################################
  # Runs on your node(s) and forwards node (host) metrics to Prometheus.
  nodeexporter:
    container_name: nodeexporter
    image: prom/node-exporter
    expose:
      - 9100
    ports:
      - 9100:9100
    restart: always
    labels:
      container_group: monitoring
    logging:
      driver: gelf
      options:
        gelf-address: udp://<your ip>:12201
        labels: container_group

  #########################################################
  # Runs on your node(s) and forwards container metrics to Prometheus.
  cadvisor:
    container_name: cadvisor
    image: google/cadvisor:v0.24.1
    ports:
      - 8080:8080
    expose:
      - 8080
    volumes:
      - "/:/rootfs:ro"
      - "/var/run:/var/run:rw"
      - "/sys:/sys:ro"
      - "/var/lib/docker/:/var/lib/docker:ro"
    restart: always
    labels:
      container_group: monitoring
    logging:
      driver: gelf
      options:
        gelf-address: udp://<your ip>:12201
        labels: container_group
  #########################################################
Then the docker-compose on the other side (for Logstash etc.) needs to be opened up too. For Logstash that was:
ports:
  - 5044:5044
  - 0.0.0.0:8090:8080
  - 12201:12201/udp
expose:
  - 5044
  - 8080
  - 12201/udp
I might have done more, but I adjusted the files so much to fit my needs (certs and all) that creating a PR would be a bit messy.
I also changed the Grafana dashboard a bit more, to show the extra dockerized apps and the multiple hosts. If you check it and tweak it back a bit, it might be of some help.
(Attached as a txt file, since it didn't accept json.) Main Overview-1482229746962.txt
Alright, looks good!
But I really would like to have this work in a swarm (option B.1 from above), since otherwise (if it's not in a VPN) logs and metrics would just be flying across the web unencrypted, and ports would be exposed to the wild. Especially as having encryption and such will be the only way for it to be compatible with the new secure mode.
I just tried setting this up with docker swarm init on the 'master' and docker swarm join on the node, but swarm mode, docker-compose, docker run and overlay networks are not really compatible enough with each other to achieve this easily.
Docker 1.13 should make this possible though, see: https://github.com/docker/compose/issues/3656 https://github.com/docker/docker/issues/23901
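Once Docker 1.13 lands, this could presumably be deployed as a compose v3 stack (docker stack deploy) onto an encrypted overlay network. A minimal sketch, assuming the service and network names below (they are placeholders, not from this repo):

```yaml
# Hypothetical compose v3 fragment for 'docker stack deploy' on Docker 1.13+.
# Services on the encrypted overlay reach each other by service name,
# so nothing needs to be published to the outside.
version: '3'
services:
  nodeexporter:
    image: prom/node-exporter
    networks:
      - monitoring
networks:
  monitoring:
    driver: overlay
    driver_opts:
      encrypted: ""
```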
Let's wait until then...
I also noticed that this will require adding jobs to the master's prometheus.yml, which will require a short write-up.
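For completeness, the kind of jobs that would go into the master's prometheus.yml probably look like this; the job names and 'node1.example.com' are placeholders for the actual node addresses:

```yaml
# Sketch of scrape jobs for one remote node in the master's prometheus.yml.
scrape_configs:
  - job_name: 'node1-nodeexporter'
    static_configs:
      - targets: ['node1.example.com:9100']
  - job_name: 'node1-cadvisor'
    static_configs:
      - targets: ['node1.example.com:8080']
```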
Nothing wrong with using swarm, but that would make all Docker 'servers' nodes in the swarm cluster. You can still run normal Docker containers on the host without interrupting the swarm. You would then be limited to the version of Docker you are running (1.13 in that case). I do like the benefit of not having to configure and close down all kinds of stuff, because it's all in its own overlay; you'd only need a user/password or something on the frontends, but that should be it. I have not looked into the jobs, but the end of the year is near, so we are always busier than normal.
What is the status of this?
Well, I posted an example; other than that I don't know much anymore. It's been a year: new software, new options, new updates...
@ScottBrenner
What @riemers said...
The requirements I listed above have been met, though, so someone could implement this now...
Docker 1.13 should make this possible though, see: docker/compose#3656 docker/docker#23901
@riemers was working on this in #5. Not sure if this is ready yet?!