Ramalama2 closed this issue 1 year ago
The docker container only contains the klipper-exporter, not a full prometheus installation. Mounting the prometheus.yml into that container will not have any effect. You need to run prometheus separately. The prometheus.yml config sets how prometheus will poll the exporter.
The container, or the executable, just runs and waits on port :9101 for prometheus to poll it, then immediately queries klipper (moonraker) and returns the data. The turn-around for data collection is the 5s prometheus poll interval (or whatever you set it to). prometheus-klipper-exporter does not have a separate polling delay, just the additional time for running the query (200ms to 400ms on my system), and grafana will show the latest data from prometheus once it's returned.
e.g.
grafana ---(query)--> prometheus:9090 ---(scrape 5s)--> klipper-exporter:9101 ---(query)--> moonraker:7125
Here's an example docker-compose.yml I use for testing the exporter. It runs grafana, prometheus, and the klipper-exporter in a single stack. Change the volume mounts as appropriate.
version: '2'
services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    hostname: prometheus
    user: "1000:1000"
    volumes:
      - ./prometheus-config/prometheus.yml:/etc/prometheus/prometheus.yml
      - ./prometheus-data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
      - '--storage.tsdb.retention.time=30d'
    expose:
      - 9090
    ports:
      - 9090:9090
    restart: unless-stopped
  grafana:
    hostname: grafana
    image: grafana/grafana:latest
    container_name: grafana
    restart: unless-stopped
    user: "1000:1000"
    volumes:
      - ./grafana:/var/lib/grafana
    environment:
      GF_SERVER_ROOT_URL: http://localhost
      GF_SECURITY_ADMIN_PASSWORD: 'password1234!@#$' # change password
      GF_AUTH_ANONYMOUS_ENABLED: 'true'
      GF_AUTH_ANONYMOUS_ORG_ROLE: 'Editor'
      GF_SECURITY_ALLOW_EMBEDDING: 'true'
    ports:
      - "3000:3000/tcp"
  klipper-exporter:
    hostname: klipper-exporter
    image: ghcr.io/scross01/prometheus-klipper-exporter:latest
    container_name: klipper-exporter
    restart: unless-stopped
    expose:
      - 9101
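The compose file above mounts ./prometheus-config/prometheus.yml into the prometheus container. As a sketch of what that file could contain — assuming the exporter follows the common multi-target pattern where a `target` query parameter names the moonraker host, a `modules` parameter selects the metric groups (the module names are taken from the log output below), and relabeling points the actual scrape at the exporter container:

```yaml
global:
  scrape_interval: 5s

scrape_configs:
  - job_name: 'klipper'
    metrics_path: /probe
    static_configs:
      # The moonraker host:port to collect from (hypothetical hostname)
      - targets: ['klipper.home.lan:7125']
    params:
      modules: [process_stats, network_stats, system_info, job_queue, directory_info, printer_objects]
    relabel_configs:
      # Pass the moonraker address to the exporter as the ?target= parameter
      - source_labels: [__address__]
        target_label: __param_target
      # Keep the moonraker address as the instance label in metrics
      - source_labels: [__param_target]
        target_label: instance
      # Scrape the exporter container itself, not moonraker directly
      - target_label: __address__
        replacement: klipper-exporter:9101
```

You can sanity-check the setup by fetching `http://localhost:9101/probe?target=klipper.home.lan:7125` (substituting your own moonraker host) with curl and confirming that metrics come back.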
To start the stack run:
$ docker compose up
...
Attaching to grafana, klipper-exporter, prometheus
...
klipper-exporter | time="2022-10-13T22:54:06Z" level=info msg="Beginning to serve on port :9101"
prometheus | ts=2022-10-13T22:54:06.729Z caller=main.go:535 level=info msg="Starting Prometheus Server" mode=server version="(version=2.37.1, branch=HEAD, revision=1ce2197e7f9e95089bfb95cb61762b5a89a8c0da)"
...
grafana | logger=settings t=2022-10-13T22:54:06.760527825Z level=info msg="Starting Grafana" version=9.1.6 commit=92461d8d1e branch=HEAD compiled=2022-09-20T10:06:14Z
...
klipper-exporter | time="2022-10-13T22:54:15Z" level=info msg="Starting metrics collection of [process_stats network_stats system_info job_queue directory_info printer_objects] for klipper.home.lan:7125"
klipper-exporter | time="2022-10-13T22:54:15Z" level=info msg="Collecting process_stats for klipper.home.lan:7125"
klipper-exporter | time="2022-10-13T22:54:15Z" level=info msg="Collecting directory_info for klipper.home.lan:7125"
klipper-exporter | time="2022-10-13T22:54:15Z" level=info msg="Collecting job_queue for klipper.home.lan:7125"
klipper-exporter | time="2022-10-13T22:54:15Z" level=info msg="Collecting system_info for klipper.home.lan:7125"
klipper-exporter | time="2022-10-13T22:54:15Z" level=info msg="Collecting printer_objects for klipper.home.lan:7125"
...
I'm so sorry for replying this late :-(
And ugh, it seems I had no clue about prometheus. Thank you very much for the hint, I understand now!
I already have Prometheus running in another container; I should have mentioned I'm new to prometheus :-)
Anyway, I moved the prometheus.yml intended for the klipper-prometheus-exporter container over to the prometheus container aanndd... everything started to work instantly...
I feel so dumb.
I doubt I was the first person to make that mistake though, so maybe a small one-liner hint/howto would help.
Thank you very much for the fast reply and for the project, and sorry for the confusion xD
Should I close this "issue"? Or you can close it if you like!
Not sure if it helps, but in case anyone wants auto-updates, I made some scripts:
update_prometheus.sh
#!/bin/bash
# Pull the latest image and recreate the prometheus container behind traefik.
imagename=prometheus
traefik_host=$imagename.mwpp.eu
dockerlink=prom/prometheus
tr_enable=true
server_port=9090
ddir=/server/$imagename
mnt_arguments="-v $ddir/prometheus.yml:/etc/prometheus/prometheus.yml"

docker stop "$imagename"
docker rm "$imagename"

# Remove the old local image so the pull fetches a fresh copy.
dockerlink_src="$(echo "$dockerlink" | cut -d':' -f1)"
CONTAINER_image_id="$(docker images --format="{{.Repository}} {{.ID}}" | grep "^$dockerlink_src" | cut -d' ' -f2)"
[ -n "$CONTAINER_image_id" ] && docker image rm "$CONTAINER_image_id"
docker pull "$dockerlink"

docker run -l traefik.enable=$tr_enable \
  -l traefik.http.routers.$imagename.rule="Host(\`$traefik_host\`)" \
  -l traefik.http.routers.$imagename.entrypoints=websecure \
  -l traefik.http.routers.$imagename.tls.certresolver=lets-encrypt \
  -l traefik.http.services.$imagename.loadbalancer.server.port=$server_port \
  -i -t -d --network=web --name=$imagename --restart=always $mnt_arguments "$dockerlink"
update_klipper-prometheus-exporter.sh
#!/bin/bash
# Pull the latest image and recreate the klipper-exporter container.
imagename=pke
traefik_host=$imagename.mwpp.eu
dockerlink=ghcr.io/scross01/prometheus-klipper-exporter:latest
tr_enable=false
server_port=9101
ddir=/server/$imagename
mnt_arguments=""

docker stop "$imagename"
docker rm "$imagename"

# Remove the old local image so the pull fetches a fresh copy.
dockerlink_src="$(echo "$dockerlink" | cut -d':' -f1)"
CONTAINER_image_id="$(docker images --format="{{.Repository}} {{.ID}}" | grep "^$dockerlink_src" | cut -d' ' -f2)"
[ -n "$CONTAINER_image_id" ] && docker image rm "$CONTAINER_image_id"
docker pull "$dockerlink"

docker run -l traefik.enable=$tr_enable \
  -l traefik.http.routers.$imagename.rule="Host(\`$traefik_host\`)" \
  -l traefik.http.routers.$imagename.entrypoints=websecure \
  -l traefik.http.routers.$imagename.tls.certresolver=lets-encrypt \
  -l traefik.http.services.$imagename.loadbalancer.server.port=$server_port \
  -i -t -d --network=host --name=$imagename --restart=always $mnt_arguments "$dockerlink"
check_updates.sh
#!/bin/bash
# Loop over all update scripts and check each image for a newer version.
FILES="/root/scripts/update_*.sh"
for f in $FILES
do
  update_name="$(grep '^imagename=' "$f" | cut -d'=' -f2)"
  echo "Checking: $update_name"
  update_source="$(grep '^dockerlink=' "$f" | cut -d'=' -f2)"
  /root/scripts/check_updates_base.sh "$update_source" "$f"
  echo -e "\n"
done
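If you want the check to run unattended, a crontab entry is one option. A sketch, with the script path taken from above and a purely hypothetical schedule and log file:

```
# Run the update check every night at 03:00; log file path is just an example
0 3 * * * /root/scripts/check_updates.sh >> /var/log/container-updates.log 2>&1
```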
I don't know how other people update their containers, but maybe this is helpful for some.
Thanks again and Cheers!
Added an example docker-based deployment configuration to the repo for the full grafana, prometheus, and klipper-exporter deployment: https://github.com/scross01/prometheus-klipper-exporter/tree/main/example Thanks for sharing the scripts.
Hi, basically:
docker run -d -p 9101:9101 -v /server/klipper-exporter/prometheus.yml:/etc/prometheus/prometheus.yml ghcr.io/scross01/prometheus-klipper-exporter:latest
doesn't seem to do the job. I'm not sure how you expect us to mount the yaml inside the container; that's the only thing I think is a requirement... However, I don't even see anything running in the container. I mean the container works, I can exec into it etc., it's just that aside from /root/main and some system files there is nothing prometheus- or klipper-exporter-related inside.
Log: INFO[0000] Beginning to serve on port :9101
That's it.
Maybe you could explain a bit how the container should run, and how you configure prometheus-klipper-exporter without mapping any config file :-)
Other than that, this project is awesome; I always dreamed of getting all the data from klipper into grafana somehow! Though I would probably have tried to find a way to push it into influxdb instead of prometheus, because of all the waiting... 15 seconds is pretty long... grafana ---(scrape 5s)--> prometheus-db ---(scrape 5s)---> prometheus-klipper-exporter ---(scrape 5s)---> moonraker..... instead of simply something like: grafana(influxdb) <--(push)--- prometheus-klipper-exporter ---(scrape 5s)---> moonraker.....
However, I'm more than happy that there is at least a solution now! So thank you very much for that.
Cheers 👍