Hi @bilalbokharee
It looks like you are referencing older monitoring documentation from our v4 operator and trying to use it with our newer v5 PGO operator. The documents you are looking for are located in a few places: please see the architecture documents for monitoring, as well as the installation guide and the examples located here for v5.1.1. This should give you all the information you need to use your previously deployed Grafana and Prometheus.
Please see the Custom Backup Configuration documentation for custom pgBackRest configuration.
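For the backup-frequency side question, a minimal sketch of scheduled backups in the PostgresCluster spec, assuming a single repo1; the cron expressions are placeholders to adapt:

    spec:
      backups:
        pgbackrest:
          repos:
            - name: repo1
              schedules:
                full: "0 1 * * 0"          # weekly full backup, Sunday 01:00
                incremental: "0 1 * * 1-6" # incremental backups on the other days

Retention can also be bounded (for example, a repo1-retention-full setting under pgbackrest.global) so old backups do not fill the storage.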
If you continue to have issues after reviewing the monitoring documentation for our v5 PGO operator, please reach out.
Closing this issue. Please re-open if you require further assistance.
@bilalbokharee I'm in the exact situation as you - not sure I want to install another Prometheus. Did you manage to get it working?
We had the same issue too, and we basically just circumvented all the Crunchy silliness. I think they (for whatever reason?) really want to force you into using their monitoring stack (fine, I'll pay the extra $$$, but why make me run 2x Prometheus and 2x Grafana instances, especially because any respectable production stack doesn't run that stuff on-cluster with the dbs; it gets exported to a monitoring cluster with Thanos or whatever).
Anyway, we just wired up our own postgres-exporter container using the sidecar feature of the operator to get around this silliness.
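For anyone following along, a minimal sketch of that approach, assuming PGO's InstanceSidecarContainers feature gate is enabled (available in newer v5 releases). The exporter image, port, and DSN below are placeholders, and required fields such as postgresVersion and storage specs are omitted for brevity:

    apiVersion: postgres-operator.crunchydata.com/v1beta1
    kind: PostgresCluster
    metadata:
      name: hippo
    spec:
      instances:
        - name: instance1
          containers:  # sidecar containers injected into each Postgres pod
            - name: exporter
              image: quay.io/prometheuscommunity/postgres-exporter:v0.11.1
              ports:
                - name: exporter
                  containerPort: 9187
              env:
                # placeholder DSN; in practice source credentials from a Secret
                - name: DATA_SOURCE_NAME
                  value: "postgresql://ccp_monitoring@localhost:5432/postgres?sslmode=disable"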
Having the same issue as I am trying to set this up myself. Wasted so much time; guess I'll just move to something else.
Hello @bilalbokharee, you just added the exporter as a sidecar container to the PostgreSQL pod. Now you need to configure, for example, a PodMonitor which will scrape the data into your Prometheus instance. For example, in *.tfvars.json (which acts as a values.yaml input) you can declare "additionalPodMonitors" in the "prometheus" section. A very basic configuration:
    {
      "additionalPodMonitors": [
        {
          "name": "crunchy-postgres-exporter",
          "namespace": "prometheus",
          "namespaceSelector": {
            "matchNames": [
              "postgres-db"
            ]
          },
          "selector": {
            "matchLabels": {
              "postgres-operator.crunchydata.com/crunchy-postgres-exporter": "true"
            }
          },
          "podTargetLabels": [
            "deployment",
            "role",
            "pg_cluster"
          ],
          "podMetricsEndpoints": [
            {
              "port": "exporter",
              "path": "/metrics",
              "honorLabels": true,
              "interval": "10s"
            }
          ]
        }
      ]
    }
I hope it will help you :)
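For anyone not driving this through Helm/Terraform values, a standalone PodMonitor manifest equivalent to the JSON above (assuming the Prometheus Operator CRDs are installed) would look something like:

    apiVersion: monitoring.coreos.com/v1
    kind: PodMonitor
    metadata:
      name: crunchy-postgres-exporter
      namespace: prometheus
    spec:
      namespaceSelector:
        matchNames:
          - postgres-db          # namespace where the Postgres pods run
      selector:
        matchLabels:
          postgres-operator.crunchydata.com/crunchy-postgres-exporter: "true"
      podTargetLabels:
        - deployment
        - role
        - pg_cluster
      podMetricsEndpoints:
        - port: exporter         # named container port on the exporter sidecar
          path: /metrics
          honorLabels: true
          interval: 10s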
Overview
I have Grafana and Prometheus already set up in my cluster, and I want to connect pg_monitor to them and expose metrics for use in the pre-deployed Grafana. So far I have only added the monitoring section to the cluster instance. The documentation tells me that I have to install the PGO Monitoring stack separately, but the pgo-metrics-deployer-cm ConfigMap contains keys like grafana_install and prometheus_install, etc. I do not want to do fresh installations; I want to use my current setup, and I want everything to come via GitOps, nothing manual. The monitoring section I added to the cluster instance adds a new container to the Postgres instance pod, and that container fails with the errors:

    psql: error: connection to server at "localhost" (::1), port 5432 failed: FATAL: password authentication failed for user "ccp_monitoring"

and

    connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused
    Is the server running on that host and accepting TCP/IP connections?

What am I missing, and what do I need to do to create this user? Also, the documentation says that the CR and CRB do not need to be created separately if they were created before? It's confusing. Can I not set up monitoring via OLM somehow, or define everything that needs to be defined in the Postgres instance CR manifest?
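For reference, the monitoring section in question looks roughly like this in the PostgresCluster spec (a sketch; the exporter image tag is a placeholder):

    spec:
      monitoring:
        pgmonitor:
          exporter:
            image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres-exporter:ubi8-5.1.0-0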
Side question: how do I specify the time interval at which pgBackRest takes backups? It's too frequent by default and fills up the storage pretty fast.
Environment
- Platform: OpenShift
- Platform Version: 4.10.0
- PGO Image Tag: ubi8-5.1.0-0
- Postgres Version: 14.2