Closed WadeBarnes closed 7 months ago
- `exporter` container in `traction-database-ha-*` stateful sets should be increased from 50m to 250m
- `pgbackrest` container in `traction-database-ha-*` stateful sets should be increased from 250m to 500m

@i5okie, where should these changes (values file) be made?
These get applied directly to the `postgrescluster` CR. They live in the trust-over-ip-configurations repo and are applied with `kubectl apply -k bc0192/dev` from the kustomize directory.

Though, a couple have been applied with ArgoCD, from the ministry-gitops-ditp repo.
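A minimal sketch of what such a kustomize overlay could look like. This assumes a hypothetical `bc0192/dev/kustomization.yaml` with the PostgresCluster manifest in a shared base; the file layout, resource name, and patch path are illustrative, not taken from this issue:

```yaml
# bc0192/dev/kustomization.yaml (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base   # assumed location of the PostgresCluster manifest

patches:
  - target:
      kind: PostgresCluster
      name: traction-database-ha   # illustrative name
    patch: |-
      # Raise the exporter sidecar CPU limit from 50m to 250m
      - op: replace
        path: /spec/monitoring/pgmonitor/exporter/resources/limits/cpu
        value: 250m
```

Running `kubectl apply -k bc0192/dev` would then render the base plus this patch and apply the result.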
The `postgres-cluster` base resource has been updated to reflect these changes and pushed to the ministry-gitops-ditp repo. The changes were applied to bc0192-dev, test, and prod via ArgoCD.

Will continue monitoring in Grafana to make sure the changes have fixed the issue.
Changes successfully resolved the throttling issue.
The Traction database instances are being throttled at >60% on average. Review and adjust the CPU resource allocations, primarily the CPU limit, to reduce or eliminate the throttling. The goal should be to reduce throttling to <25% on average. For production an even lower average may be desirable.

These metrics can be easily reviewed using the Namespace Monitoring dashboard available through Grafana in our new monitoring stack.
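Outside of the dashboard, the throttling ratio can also be checked directly with a PromQL query. A sketch, assuming cAdvisor container metrics are scraped and using a hypothetical `bc0192-prod` namespace label; a result above 0.25 means the container was throttled in more than 25% of its CFS scheduling periods:

```promql
# Fraction of CFS periods in which each container was throttled (5m window)
sum by (container) (
  rate(container_cpu_cfs_throttled_periods_total{namespace="bc0192-prod", pod=~"traction-database-ha-.*"}[5m])
)
/
sum by (container) (
  rate(container_cpu_cfs_periods_total{namespace="bc0192-prod", pod=~"traction-database-ha-.*"}[5m])
)
```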