Closed: michaelpietzsch closed this issue 3 years ago.

Hi guys,

here's my .conf segment:

```toml
[prometheus]
disable = false
http_listen = "0.0.0.0:9130"
report_errors = false
```

I am using Kubernetes to scrape the pod... autodiscovery works fine:

```yaml
annotations:
  prometheus.io/path: /metrics
  prometheus.io/port: "9130"
  prometheus.io/scrape: "true"
```

But I am getting "connection refused" from the pod. Any ideas?
Need to pass through port 9130.
@davidnewhall what do you mean by that?
I don't use Kubernetes, so I don't know. In Docker, you do it with `-p 9130:9130` when you run `docker run`.
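In docker-compose terms, that mapping would look something like this (just a sketch, not my actual setup):

```yaml
# Sketch: publish the exporter port, equivalent to `-p 9130:9130`.
services:
  unifi-poller:
    image: golift/unifi-poller:latest
    ports:
      - "9130:9130"  # host:container
```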
@davidnewhall thanks for the input... but it works a bit differently inside a k8s cluster...
I've found out that it's using secure HTTP now (https). Can I disable that?
https is not the default. What makes you think it's using https? Have you gotten past "connection refused"?

I've looked in the logs...

```
2021/02/02 08:27:24.981506 collector.go:134: [INFO] Prometheus exported at https://0.0.0.0:9130/ - namespace: unifipoller
```

https....
Can you show me your poller configuration?
```toml
[poller]
debug = true
quiet = false
plugins = []

[influxdb]
disable = false
interval = "30s"
url = "http://influx.monitoring.svc.cluster.local:8086"
user = "SNHa8BctioiPK5adY9GX"
pass = "PRIVATE"
db = "maindb"
verify_ssl = false

[prometheus]
disable = false
http_listen = "0.0.0.0:9130"
report_errors = false

[unifi]
dynamic = false

[loki]
url = "http://loki.monitoring.svc.cluster.local:3100"

[[unifi.controller]]
url = "https://10.30.0.1"
user = "unifipoller"
pass = "PRIVATE"
sites = ["all"]
save_ids = true
save_dpi = true
save_sites = true
hash_pii = false
verify_ssl = false

[[unifi.controller]]
url = "https://10.1.0.1"
user = "unifipoller"
pass = "PRIVATE"
sites = ["all"]
save_ids = true
save_dpi = true
save_sites = true
hash_pii = false
verify_ssl = false

[[unifi.controller]]
url = "https://10.10.0.12"
user = "unifipoller"
pass = "PRIVATE"
sites = ["all"]
save_ids = true
save_dpi = true
save_sites = true
hash_pii = false
verify_ssl = false
```
Your config is comprehensive. A few questions: what's the Prometheus error? Still "connection refused"?

@davidnewhall

> What's the Prometheus error? Still "connection refused"?

Yep... maybe I'm overlooking something in the Kubernetes config...
What owns the IP 10.244.1.65? Are you familiar with passing ports through in k8s?
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: unifi-poller
  namespace: monitoring
  labels:
    app: unifi-poller
    type: poller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: unifi-poller
      type: poller
  template:
    metadata:
      labels:
        app: unifi-poller
        type: poller
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: "9130"
        prometheus.io/scrapesecure: "true"
    spec:
      containers:
        - name: unifi-poller
          image: golift/unifi-poller:latest
          ports:
            - containerPort: 9130
              name: tcp
              protocol: TCP
            - containerPort: 9130
              name: udp
              protocol: UDP
          volumeMounts:
            - name: config-volume
              mountPath: /config/unifi-poller.conf
              subPath: unifi-poller.conf
      volumes:
        - name: config-volume
          configMap:
            name: unifi-poller
```
> What owns the IP 10.244.1.65? Are you familiar with passing ports through in k8s?

There is an autodiscovery concept for Prometheus inside of Kubernetes... as you can see in the annotations above, the container port should be open cluster-internally.
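For reference, that annotation-based autodiscovery is typically wired up with a relabel config roughly like this (a sketch; the job name and exact rules are illustrative, not copied from my cluster):

```yaml
# Sketch of annotation-driven pod discovery in prometheus.yml.
scrape_configs:
  - job_name: "kubernetes-pods"
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods annotated prometheus.io/scrape: "true".
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Honor a custom metrics path if annotated.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      # Point the target at <pod-ip>:<annotated port>.
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```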
I mean, I know YAML and I know config files. I don't know Kubernetes, as I've never run it. Nothing in this stands out as wrong; I'm just not able to look at it and tell if anything is missing or perhaps in the wrong place/indent/etc.

And yeah, it seems like you're adding annotations to tell Prometheus how to scrape this thing. `scrapesecure` should probably be false since it's not using https (but I'm only assuming that's what that annotation means).
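Something like this, assuming that annotation really does control the scrape scheme:

```yaml
# Suggested tweak, assuming scrapesecure picks http vs https:
prometheus.io/scrapesecure: "false"  # the exporter serves plain http
```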
It seems like you wouldn't even need to pass the port through. In my setup, in unRAID (Docker), I run Prometheus and the poller in the same network bridge, so I don't forward port 9130 at all; I use Docker DNS to connect to the container by name (across the Docker network). It works beautifully. I can't really pinpoint what's going on here, but maybe others can. I'll drop a note on the Discord.
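For comparison, my Docker-side scrape config boils down to something like this (the container name is an assumption here; use whatever you named the poller container):

```yaml
# Sketch: static target resolved via Docker DNS on the shared
# bridge network; "unifi-poller" is the assumed container name.
scrape_configs:
  - job_name: "unifipoller"
    static_configs:
      - targets: ["unifi-poller:9130"]
```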
@davidnewhall you've been helpful. Pinpointing the string typo in the code (the log line prints `https://` even though the exporter listens on plain http) already helps with isolating this issue. We will see, maybe someone here uses Kubernetes. Once everything is running, I'd be happy to contribute my YAML configs to the project as templates...
@davidnewhall the issue is solved now... the change was that I added an additional site... now somebody please explain this to me. I also did some testing: I added a sidecar container and could access the metrics from inside the pod...
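For the record, the sidecar test was nothing fancy: roughly this, added to the Deployment's container list (the name and image are just illustrative; any image with a shell and curl works):

```yaml
# Throwaway debug sidecar; containers in a pod share a network
# namespace, so the exporter is reachable via localhost.
- name: debug
  image: curlimages/curl:latest  # assumed image
  command: ["sleep", "infinity"]
```

Then `kubectl exec` into it and `curl http://localhost:9130/metrics`.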