Describe the bug (actual behavior)
I recently installed the Sentry Helm chart, and it seems that the sentry-ingest consumers are leaking Postgres connections. After one day of running with very low traffic, all 100 PostgreSQL connections were already occupied, even though there are only ~40 pods in total with a single instance of each (except for the 3 Kafka controller replicas). This takes the whole Sentry app down, since no more Postgres connections are available.
After a few days of monitoring, it looks like the pod sentry-ingest-monitors is the one leaking connections: I restarted the PostgreSQL pod ~16 hours ago, and this pod already holds 29 connections to Postgres 🤔
Expected behavior
100 Postgres connections should be more than enough to run this Helm chart with only one pod per service; the ingest consumers should not swallow all of them.
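For anyone checking the same thing: the per-pod connection counts above can be read from PostgreSQL's standard pg_stat_activity view. A query like the following shows which client address holds how many connections (the PostgreSQL pod name here is an assumption; adjust it to your release):

```shell
# Group open connections by client address and application name,
# so the leaking pod's IP stands out at the top of the list.
# "sentry-postgresql-0" is a guessed pod name; adjust to your release.
kubectl exec -it sentry-postgresql-0 -- psql -U postgres -c "
  SELECT client_addr, application_name, state, count(*)
  FROM pg_stat_activity
  GROUP BY client_addr, application_name, state
  ORDER BY count(*) DESC;"
```

Cross-referencing the top client_addr against `kubectl get pods -o wide` is what pointed at sentry-ingest-monitors.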
values.yaml
Helm chart version
25.10.0
Steps to reproduce
Install the chart, plug in an app sending a bit of traffic, and wait a few days :)
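The install step above can be sketched as follows (release name, namespace, and repo URL are assumptions; adjust to your setup):

```shell
# Add the sentry-kubernetes chart repo (assumed URL) and install the
# affected chart version into its own namespace.
helm repo add sentry https://sentry-kubernetes.github.io/charts
helm repo update
helm install sentry sentry/sentry \
  --namespace sentry --create-namespace \
  --version 25.10.0 \
  -f values.yaml
```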
Screenshots
Logs
No response
Additional context
No response