@choryuidentify I'm actually seeing the same behavior. It may be a few days before I'm able to investigate in detail but I'll try to figure it out.
@choryuidentify https://github.com/odenio/pgpool-cloudsql/pull/7 should address this -- the v1.0.10 release will include the fix.
In the meantime, if you want the spew to stop right now, you can edit configmap/pgpool-metadata-telegraf
in whatever namespace you've deployed the chart to, and add the following at the end:
```toml
[[inputs.internal]]
  collect_memstats = false
```
...and then restart all the pods.
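If it helps, a minimal sketch of those two steps, assuming the chart is deployed in a namespace called `pgpool` (the namespace and the label selector below are both guesses -- adjust to your install):

```shell
# Open the telegraf configmap for editing ("pgpool" namespace is an assumption):
kubectl -n pgpool edit configmap/pgpool-metadata-telegraf

# Then bounce the pods so telegraf picks up the new config; the label
# selector here is a guess for this chart:
kubectl -n pgpool delete pods -l app.kubernetes.io/name=pgpool-cloudsql
```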
v1.0.10 has been released and should no longer exhibit this behavior.
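Picking up the new release is just the usual in-place upgrade; the release name, repo alias, and namespace below are placeholders for however you originally installed the chart:

```shell
# Placeholders: "pgpool" release/namespace and "oden" repo alias.
helm repo update
helm -n pgpool upgrade pgpool oden/pgpool-cloudsql --version 1.0.10
```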
Hm, I may have spoken too soon -- even with collect_memstats set to false those errors still happen; this appears to be related to https://github.com/influxdata/telegraf/issues/8514, which dates back to 2020. :(
I've released version 1.0.11 of pgpool-cloudsql, which fixes the issue via the simple expedient of filtering go_gc_duration_seconds
out of telegraf's stdout/stderr streams. This will have to do in the short term: the real fix here is to update the telegraf stackdriver output plugin to fully support histogram/distribution metrics, which is probably going to require a substantial rewrite and is for sure not happening in 2022.
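For the curious, the filtering amounts to something like this -- a minimal sketch, not the chart's literal entrypoint:

```shell
#!/bin/sh
# Sketch only: run telegraf and drop the go_gc_duration_seconds error
# lines from its combined stdout/stderr stream. Config path is an assumption.
telegraf --config /etc/telegraf/telegraf.conf 2>&1 \
  | grep -v go_gc_duration_seconds
```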
@n-oden Thanks, it works!
Hi.
I'm using this chart with a GKE Autopilot cluster. I installed the chart and bound the pgpool service account to the GCP IAM roles roles/cloudsql.viewer and roles/monitoring.metricWriter (roughly as sketched below). It looks like everything is working, but the error logs below keep appearing.
These logs are printed every 10 seconds...
How can I fix it?
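For reference, I bound the roles roughly like this; the project ID and Google service account name below are placeholders:

```shell
# PROJECT_ID and the service account email are placeholders.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:pgpool@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/cloudsql.viewer"
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:pgpool@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/monitoring.metricWriter"
```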