What happened?
The most recent release (0.16.1) moved from a ServiceMonitor to a PodMonitor. At least with my simple config, it doesn't work at all. The same config had worked when the chart used a ServiceMonitor -- the only relevant option I had set was podMonitor.enabled to true.
What I see now is that the postgres exporter, which I believe should be discovered by Prometheus, is not.
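For reference, this is the full extent of my monitoring-related configuration (a minimal sketch; everything else in my values.yaml is unrelated to metrics and left at chart defaults):

```yaml
# values.yaml (sketch) -- the only monitoring-related option I set.
podMonitor:
  enabled: true
```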
Did you expect to see something different?
I expect Timescale's postgres exporter to be picked up by Prometheus via the PodMonitor.
How to reproduce it (as minimally and precisely as possible):
1. Deploy the current version of the chart with podMonitor.enabled set to true in a cluster running kube-prometheus-stack (see the commands sketched below).
2. Look at the Prometheus targets page: the PodMonitor is not matching anything, so Prometheus won't collect metrics.
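Roughly, from the command line (a sketch: the repo URL and release name are illustrative, and kube-prometheus-stack is assumed to already be running with its default PodMonitor discovery settings):

```shell
# Install the chart with only podMonitor.enabled overridden (release name "example" is a placeholder).
helm repo add timescale https://charts.timescale.com
helm install example timescale/timescaledb-single --set podMonitor.enabled=true

# Compare the PodMonitor's selector with the labels actually present on the pods;
# on my cluster the Prometheus targets page shows nothing matching this PodMonitor.
kubectl get podmonitor -o yaml
kubectl get pods --show-labels
```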
Environment
- Helm chart and version: timescaledb-single 0.16.1
- values.yaml: only podMonitor.enabled set to true (see the snippet above)
- Kubernetes cluster kind: microk8s
Anything else we need to know?:
I have a PR that fixes this for me and will share it shortly.