sentry-kubernetes / charts

Easily deploy Sentry on your Kubernetes Cluster
MIT License

sentry-ingest-monitors seems to leak postgres connections #1487

Open · spuyet opened this issue 2 days ago

spuyet commented 2 days ago

Describe the bug (actual behavior)

I recently installed the Sentry Helm chart and it seems that the Sentry ingest consumers are leaking postgres connections. After one day of running with really low traffic, all 100 postgres connections were already occupied, even though there are only ~40 pods in total with a single instance of each (except for the 3 Kafka controller replicas). This brings the whole Sentry app down, as no more postgres connections are available.

After a few days of monitoring, it looks like the sentry-ingest-monitors pod is leaking connections to postgres: I restarted the postgresql pod ~16 hours ago, and this single pod already holds 29 connections to postgres 🤔
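(For reference, one way to attribute connections to individual pods is to group pg_stat_activity by client address and match the addresses against kubectl get pods -o wide. A rough sketch, assuming the chart's default sentry-postgresql-0 pod name and the Bitnami image's POSTGRES_PASSWORD environment variable:

kubectl exec -n <namespace> sentry-postgresql-0 -- sh -c 'PGPASSWORD="$POSTGRES_PASSWORD" psql -U postgres -d sentry -c "SELECT client_addr, count(*) FROM pg_stat_activity GROUP BY client_addr ORDER BY 2 DESC;"'
)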

Expected behavior

100 postgres connections should be more than enough to run this Helm chart with only one pod per service; the ingest consumers should not swallow all available postgres connections.

values.yaml

asHook: true
auth:
  register: true
clickhouse:
  clickhouse:
    configmap:
      remote_servers:
        internal_replication: true
        replica:
          backup:
            enabled: false
      users:
        enabled: false
        user:
          - config:
              networks:
                - '::/0'
              password: ''
              profile: default
              quota: default
            name: default
      zookeeper_servers:
        config:
          - hostTemplate: '{{ .Release.Name }}-zookeeper-clickhouse'
            index: clickhouse
            port: '2181'
        enabled: true
    imageVersion: 21.8.13.6
    persistentVolumeClaim:
      dataPersistentVolume:
        accessModes:
          - ReadWriteOnce
        enabled: true
        storage: 30Gi
      enabled: true
    replicas: '1'
  enabled: true
config:
  configYml: {}
  relay: |
    # No YAML relay config given
  sentryConfPy: |
    # No Python Extension Config Given
  snubaSettingsPy: |
    # No Python Extension Config Given
  web:
    httpKeepalive: 15
    maxRequests: 100000
    maxRequestsDelta: 500
    maxWorkerLifetime: 86400
discord: {}
externalClickhouse:
  database: default
  host: clickhouse
  httpPort: 8123
  password: ''
  singleNode: true
  tcpPort: 9000
  username: default
externalKafka:
  port: 9092
externalPostgresql:
  connMaxAge: 0
  database: sentry
  existingSecretKeys: {}
  port: 5432
  username: postgres
externalRedis:
  port: 6379
extraManifests: []
filestore:
  backend: filesystem
  filesystem:
    path: /var/lib/sentry/files
    persistence:
      accessMode: ReadWriteOnce
      enabled: true
      existingClaim: ''
      persistentWorkers: false
      size: 10Gi
  gcs: {}
  s3: {}
geodata:
  mountPath: ''
  path: ''
  volumeName: ''
github: {}
google: {}
hooks:
  activeDeadlineSeconds: 600
  dbCheck:
    affinity: {}
    containerSecurityContext: {}
    enabled: true
    env: []
    image:
      imagePullSecrets: []
    nodeSelector: {}
    podAnnotations: {}
    resources:
      limits:
        memory: 64Mi
      requests:
        cpu: 100m
        memory: 64Mi
    securityContext: {}
  dbInit:
    affinity: {}
    enabled: true
    env: []
    nodeSelector: {}
    podAnnotations: {}
    resources:
      limits:
        memory: 2048Mi
      requests:
        cpu: 300m
        memory: 2048Mi
    sidecars: []
    volumes: []
  enabled: true
  preUpgrade: false
  removeOnSuccess: true
  shareProcessNamespace: false
  snubaInit:
    affinity: {}
    enabled: true
    kafka:
      enabled: true
    nodeSelector: {}
    podAnnotations: {}
    resources:
      limits:
        cpu: 2000m
        memory: 1Gi
      requests:
        cpu: 700m
        memory: 1Gi
  snubaMigrate:
    enabled: true
images:
  relay:
    imagePullSecrets: []
  sentry:
    imagePullSecrets: []
  snuba:
    imagePullSecrets: []
  symbolicator:
    imagePullSecrets: []
  vroom:
    imagePullSecrets: []
ingress:
  alb:
    httpRedirect: false
  enabled: true
  regexPathStyle: nginx
ipv6: false
kafka:
  controller:
    replicaCount: 3
    resourcesPreset: large
  enabled: true
  kraft:
    enabled: true
  listeners:
    client:
      protocol: PLAINTEXT
    controller:
      protocol: PLAINTEXT
    external:
      protocol: PLAINTEXT
    interbroker:
      protocol: PLAINTEXT
  provisioning:
    enabled: true
    topics:
      - config:
          message.timestamp.type: LogAppendTime
        name: events
      - name: event-replacements
      - config:
          cleanup.policy: compact,delete
          min.compaction.lag.ms: '3600000'
        name: snuba-commit-log
      - name: cdc
      - config:
          message.timestamp.type: LogAppendTime
        name: transactions
      - config:
          cleanup.policy: compact,delete
          min.compaction.lag.ms: '3600000'
        name: snuba-transactions-commit-log
      - config:
          message.timestamp.type: LogAppendTime
        name: snuba-metrics
      - name: outcomes
      - name: outcomes-billing
      - name: ingest-sessions
      - config:
          cleanup.policy: compact,delete
          min.compaction.lag.ms: '3600000'
        name: snuba-sessions-commit-log
      - config:
          cleanup.policy: compact,delete
          min.compaction.lag.ms: '3600000'
        name: snuba-metrics-commit-log
      - name: scheduled-subscriptions-events
      - name: scheduled-subscriptions-transactions
      - name: scheduled-subscriptions-sessions
      - name: scheduled-subscriptions-metrics
      - name: scheduled-subscriptions-generic-metrics-sets
      - name: scheduled-subscriptions-generic-metrics-distributions
      - name: scheduled-subscriptions-generic-metrics-counters
      - name: events-subscription-results
      - name: transactions-subscription-results
      - name: sessions-subscription-results
      - name: metrics-subscription-results
      - name: generic-metrics-subscription-results
      - config:
          message.timestamp.type: LogAppendTime
        name: snuba-queries
      - config:
          message.timestamp.type: LogAppendTime
        name: processed-profiles
      - name: profiles-call-tree
      - config:
          max.message.bytes: '15000000'
          message.timestamp.type: LogAppendTime
        name: ingest-replay-events
      - config:
          message.timestamp.type: LogAppendTime
        name: snuba-generic-metrics
      - config:
          cleanup.policy: compact,delete
          min.compaction.lag.ms: '3600000'
        name: snuba-generic-metrics-sets-commit-log
      - config:
          cleanup.policy: compact,delete
          min.compaction.lag.ms: '3600000'
        name: snuba-generic-metrics-distributions-commit-log
      - config:
          cleanup.policy: compact,delete
          min.compaction.lag.ms: '3600000'
        name: snuba-generic-metrics-counters-commit-log
      - config:
          message.timestamp.type: LogAppendTime
        name: generic-events
      - config:
          cleanup.policy: compact,delete
          min.compaction.lag.ms: '3600000'
        name: snuba-generic-events-commit-log
      - config:
          message.timestamp.type: LogAppendTime
        name: group-attributes
      - name: snuba-attribution
      - name: snuba-dead-letter-metrics
      - name: snuba-dead-letter-sessions
      - name: snuba-dead-letter-generic-metrics
      - name: snuba-dead-letter-replays
      - name: snuba-dead-letter-generic-events
      - name: snuba-dead-letter-querylog
      - name: snuba-dead-letter-group-attributes
      - name: ingest-attachments
      - name: ingest-transactions
      - name: ingest-events
      - name: ingest-replay-recordings
      - name: ingest-metrics
      - name: ingest-performance-metrics
      - name: ingest-monitors
      - name: profiles
      - name: ingest-occurrences
      - name: snuba-spans
      - name: shared-resources-usage
      - name: snuba-metrics-summaries
  zookeeper:
    enabled: false
mail:
  backend: smtp
  from: 'xxxx'
  host: 'xxxx'
  password: 'xxxx'
  port: 587
  useSsl: false
  useTls: true
  username: 'xxxx'
memcached:
  args:
    - memcached
    - '-u memcached'
    - '-p 11211'
    - '-v'
    - '-m $(MEMCACHED_MEMORY_LIMIT)'
    - '-I $(MEMCACHED_MAX_ITEM_SIZE)'
  extraEnvVarsCM: sentry-memcached
  maxItemSize: '26214400'
  memoryLimit: '2048'
metrics:
  affinity: {}
  containerSecurityContext: {}
  enabled: false
  image:
    pullPolicy: IfNotPresent
    repository: prom/statsd-exporter
    tag: v0.17.0
  livenessProbe:
    enabled: true
    failureThreshold: 3
    initialDelaySeconds: 30
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 2
  nodeSelector: {}
  podAnnotations: {}
  readinessProbe:
    enabled: true
    failureThreshold: 3
    initialDelaySeconds: 30
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 2
  resources: {}
  securityContext: {}
  service:
    labels: {}
    type: ClusterIP
  serviceMonitor:
    additionalLabels: {}
    enabled: false
    metricRelabelings: []
    namespace: ''
    namespaceSelector: {}
    relabelings: []
    scrapeInterval: 30s
  tolerations: []
nginx:
  containerPort: 8080
  customReadinessProbe:
    failureThreshold: 3
    initialDelaySeconds: 5
    periodSeconds: 5
    successThreshold: 1
    tcpSocket:
      port: http
    timeoutSeconds: 3
  enabled: true
  existingServerBlockConfigmap: '{{ template "sentry.fullname" . }}'
  extraLocationSnippet: false
  metrics:
    serviceMonitor: {}
  replicaCount: 1
  resources: {}
  service:
    ports:
      http: 80
    type: ClusterIP
openai: {}
postgresql:
  auth:
    database: sentry
  connMaxAge: 0
  enabled: true
  nameOverride: sentry-postgresql
  replication:
    applicationName: sentry
    enabled: false
    numSynchronousReplicas: 1
    readReplicas: 2
    synchronousCommit: 'on'
  extendedConfiguration: |
    max_connections=300
    shared_buffers='80MB'
prefix: null
rabbitmq:
  auth:
    erlangCookie: pHgpy3Q6adTskzAT6bLHCFqFTF7lMxhA
    password: guest
    username: guest
  clustering:
    forceBoot: true
    rebalance: true
  enabled: true
  extraConfiguration: |
    load_definitions = /app/load_definition.json
  extraSecrets:
    load-definition:
      load_definition.json: |
        {
          "users": [
            {
              "name": "{{ .Values.auth.username }}",
              "password": "{{ .Values.auth.password }}",
              "tags": "administrator"
            }
          ],
          "permissions": [{
            "user": "{{ .Values.auth.username }}",
            "vhost": "/",
            "configure": ".*",
            "write": ".*",
            "read": ".*"
          }],
          "policies": [
            {
              "name": "ha-all",
              "pattern": ".*",
              "vhost": "/",
              "definition": {
                "ha-mode": "all",
                "ha-sync-mode": "automatic",
                "ha-sync-batch-size": 1
              }
            }
          ],
          "vhosts": [
            {
              "name": "/"
            }
          ]
        }
  loadDefinition:
    enabled: true
    existingSecret: load-definition
  memoryHighWatermark: {}
  nameOverride: ''
  pdb:
    create: true
  persistence:
    enabled: true
  replicaCount: 1
  resources: {}
  vhost: /
redis:
  auth:
    enabled: false
    sentinel: false
  enabled: true
  master:
    persistence:
      enabled: true
  nameOverride: sentry-redis
  replica:
    replicaCount: 1
  usePassword: false
relay:
  affinity: {}
  autoscaling:
    enabled: false
    maxReplicas: 5
    minReplicas: 2
    targetCPUUtilizationPercentage: 50
  containerSecurityContext: {}
  customResponseHeaders: []
  enabled: true
  env: []
  init:
    resources: {}
  mode: managed
  nodeSelector: {}
  probeFailureThreshold: 5
  probeInitialDelaySeconds: 10
  probePeriodSeconds: 10
  probeSuccessThreshold: 1
  probeTimeoutSeconds: 2
  processing:
    kafkaConfig:
      messageMaxBytes: 50000000
  replicas: 1
  resources: {}
  securityContext: {}
  securityPolicy: ''
  service:
    annotations: {}
  sidecars: []
  topologySpreadConstraints: []
  volumeMounts: []
  volumes: []
revisionHistoryLimit: 10
sentry:
  billingMetricsConsumer:
    affinity: {}
    autoscaling:
      enabled: false
      maxReplicas: 3
      minReplicas: 1
      targetCPUUtilizationPercentage: 50
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
  cleanup:
    activeDeadlineSeconds: 100
    concurrency: 1
    concurrencyPolicy: Allow
    days: 90
    enabled: true
    failedJobsHistoryLimit: 5
    logLevel: ''
    schedule: 0 0 * * *
    serviceAccount: {}
    sidecars: []
    successfulJobsHistoryLimit: 5
    volumes: []
  cron:
    affinity: {}
    enabled: true
    env: []
    nodeSelector: {}
    replicas: 1
    resources: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
  features:
    enableFeedback: false
    enableProfiling: false
    enableSessionReplay: true
    enableSpan: false
    orgSubdomains: false
    vstsLimitedScopes: true
  genericMetricsConsumer:
    affinity: {}
    autoscaling:
      enabled: false
      maxReplicas: 3
      minReplicas: 1
      targetCPUUtilizationPercentage: 50
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
  ingestConsumerAttachments:
    affinity: {}
    autoscaling:
      enabled: false
      maxReplicas: 3
      minReplicas: 1
      targetCPUUtilizationPercentage: 50
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
  ingestConsumerEvents:
    affinity: {}
    autoscaling:
      enabled: false
      maxReplicas: 3
      minReplicas: 1
      targetCPUUtilizationPercentage: 50
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
  ingestConsumerTransactions:
    affinity: {}
    autoscaling:
      enabled: false
      maxReplicas: 3
      minReplicas: 1
      targetCPUUtilizationPercentage: 50
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
  ingestMonitors:
    affinity: {}
    autoscaling:
      enabled: false
      maxReplicas: 3
      minReplicas: 1
      targetCPUUtilizationPercentage: 50
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
  ingestOccurrences:
    affinity: {}
    autoscaling:
      enabled: false
      maxReplicas: 3
      minReplicas: 1
      targetCPUUtilizationPercentage: 50
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
  ingestProfiles:
    affinity: {}
    autoscaling:
      enabled: false
      maxReplicas: 3
      minReplicas: 1
      targetCPUUtilizationPercentage: 50
    containerSecurityContext: {}
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
  ingestReplayRecordings:
    affinity: {}
    autoscaling:
      enabled: false
      maxReplicas: 3
      minReplicas: 1
      targetCPUUtilizationPercentage: 50
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
  metricsConsumer:
    affinity: {}
    autoscaling:
      enabled: false
      maxReplicas: 3
      minReplicas: 1
      targetCPUUtilizationPercentage: 50
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
  postProcessForwardErrors:
    affinity: {}
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
  postProcessForwardIssuePlatform:
    affinity: {}
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
  postProcessForwardTransactions:
    affinity: {}
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
  singleOrganization: true
  subscriptionConsumerEvents:
    affinity: {}
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
  subscriptionConsumerGenericMetrics:
    affinity: {}
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
  subscriptionConsumerMetrics:
    affinity: {}
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
  subscriptionConsumerSessions:
    affinity: {}
    containerSecurityContext: {}
    env: []
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
  subscriptionConsumerTransactions:
    affinity: {}
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
  web:
    affinity: {}
    autoscaling:
      enabled: false
      maxReplicas: 5
      minReplicas: 2
      targetCPUUtilizationPercentage: 50
    containerSecurityContext: {}
    customResponseHeaders: []
    enabled: true
    env: []
    nodeSelector: {}
    probeFailureThreshold: 5
    probeInitialDelaySeconds: 10
    probePeriodSeconds: 10
    probeSuccessThreshold: 1
    probeTimeoutSeconds: 2
    replicas: 1
    resources: {}
    securityContext: {}
    securityPolicy: ''
    service:
      annotations: {}
    sidecars: []
    strategyType: RollingUpdate
    topologySpreadConstraints: []
    volumeMounts: []
    volumes: []
  worker:
    affinity: {}
    autoscaling:
      enabled: false
      maxReplicas: 5
      minReplicas: 2
      targetCPUUtilizationPercentage: 50
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      failureThreshold: 3
      periodSeconds: 60
      timeoutSeconds: 10
    nodeSelector: {}
    replicas: 1
    resources: {}
    sidecars: []
    topologySpreadConstraints: []
    volumeMounts: []
    volumes: []
  workerEvents:
    affinity: {}
    autoscaling:
      enabled: false
      maxReplicas: 5
      minReplicas: 2
      targetCPUUtilizationPercentage: 50
    enabled: false
    env: []
    livenessProbe:
      enabled: false
      failureThreshold: 3
      periodSeconds: 60
      timeoutSeconds: 10
    nodeSelector: {}
    queues: events.save_event,post_process_errors
    replicas: 1
    resources: {}
    sidecars: []
    topologySpreadConstraints: []
    volumeMounts: []
    volumes: []
  workerTransactions:
    affinity: {}
    autoscaling:
      enabled: false
      maxReplicas: 5
      minReplicas: 2
      targetCPUUtilizationPercentage: 50
    enabled: false
    env: []
    livenessProbe:
      enabled: false
      failureThreshold: 3
      periodSeconds: 60
      timeoutSeconds: 10
    nodeSelector: {}
    queues: events.save_event_transaction,post_process_transactions
    replicas: 1
    resources: {}
    sidecars: []
    topologySpreadConstraints: []
    volumeMounts: []
    volumes: []
service:
  annotations: {}
  externalPort: 9000
  name: sentry
  type: ClusterIP
serviceAccount:
  annotations: {}
  automountServiceAccountToken: true
  enabled: false
  name: sentry
slack: {}
snuba:
  api:
    affinity: {}
    autoscaling:
      enabled: false
      maxReplicas: 5
      minReplicas: 2
      targetCPUUtilizationPercentage: 50
    command: []
    containerSecurityContext: {}
    enabled: true
    env: []
    liveness:
      timeoutSeconds: 2
    nodeSelector: {}
    probeInitialDelaySeconds: 10
    readiness:
      timeoutSeconds: 2
    replicas: 1
    resources: {}
    securityContext: {}
    service:
      annotations: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
  clickhouse:
    maxConnections: 100
  consumer:
    affinity: {}
    autoOffsetReset: earliest
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    maxBatchTimeMs: 750
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    topologySpreadConstraints: []
  dbInitJob:
    env: []
  genericMetricsCountersConsumer:
    affinity: {}
    autoOffsetReset: earliest
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    maxBatchTimeMs: 750
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    topologySpreadConstraints: []
  genericMetricsDistributionConsumer:
    affinity: {}
    autoOffsetReset: earliest
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    maxBatchTimeMs: 750
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    topologySpreadConstraints: []
  genericMetricsSetsConsumer:
    affinity: {}
    autoOffsetReset: earliest
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    maxBatchTimeMs: 750
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    topologySpreadConstraints: []
  groupAttributesConsumer:
    affinity: {}
    autoOffsetReset: earliest
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    maxBatchTimeMs: 750
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    topologySpreadConstraints: []
  issueOccurrenceConsumer:
    affinity: {}
    autoOffsetReset: earliest
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    maxBatchTimeMs: 750
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    topologySpreadConstraints: []
  metricsConsumer:
    affinity: {}
    autoOffsetReset: earliest
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    maxBatchTimeMs: 750
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    topologySpreadConstraints: []
  migrateJob:
    env: []
  outcomesBillingConsumer:
    affinity: {}
    autoOffsetReset: earliest
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    maxBatchSize: '3'
    maxBatchTimeMs: 750
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    topologySpreadConstraints: []
  outcomesConsumer:
    affinity: {}
    autoOffsetReset: earliest
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    maxBatchSize: '3'
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    topologySpreadConstraints: []
  profilingFunctionsConsumer:
    affinity: {}
    autoOffsetReset: earliest
    containerSecurityContext: {}
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    maxBatchTimeMs: 750
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    topologySpreadConstraints: []
  profilingProfilesConsumer:
    affinity: {}
    autoOffsetReset: earliest
    containerSecurityContext: {}
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    maxBatchTimeMs: 750
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    topologySpreadConstraints: []
  replacer:
    affinity: {}
    autoOffsetReset: earliest
    containerSecurityContext: {}
    enabled: true
    env: []
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    topologySpreadConstraints: []
  replaysConsumer:
    affinity: {}
    autoOffsetReset: earliest
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    maxBatchTimeMs: 750
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    topologySpreadConstraints: []
  rustConsumer: false
  sessionsConsumer:
    affinity: {}
    autoOffsetReset: earliest
    containerSecurityContext: {}
    env: []
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    topologySpreadConstraints: []
  spansConsumer:
    affinity: {}
    autoOffsetReset: earliest
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    maxBatchTimeMs: 750
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    topologySpreadConstraints: []
  subscriptionConsumerEvents:
    affinity: {}
    autoOffsetReset: earliest
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    topologySpreadConstraints: []
  subscriptionConsumerMetrics:
    affinity: {}
    autoOffsetReset: earliest
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    topologySpreadConstraints: []
  subscriptionConsumerSessions:
    affinity: {}
    autoOffsetReset: earliest
    containerSecurityContext: {}
    env: []
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
  subscriptionConsumerTransactions:
    affinity: {}
    autoOffsetReset: earliest
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    topologySpreadConstraints: []
  transactionsConsumer:
    affinity: {}
    autoOffsetReset: earliest
    containerSecurityContext: {}
    enabled: true
    env: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    maxBatchTimeMs: 750
    nodeSelector: {}
    replicas: 1
    resources: {}
    securityContext: {}
    topologySpreadConstraints: []
sourcemaps:
  enabled: false
symbolicator:
  api:
    affinity: {}
    autoscaling:
      enabled: false
      maxReplicas: 5
      minReplicas: 2
      targetCPUUtilizationPercentage: 50
    config: |-
      # See: https://getsentry.github.io/symbolicator/#configuration
      cache_dir: "/data"
      bind: "0.0.0.0:3021"
      logging:
        level: "warn"
      metrics:
        statsd: null
        prefix: "symbolicator"
      sentry_dsn: null
      connect_to_reserved_ips: true
      # caches:
      #   downloaded:
      #     max_unused_for: 1w
      #     retry_misses_after: 5m
      #     retry_malformed_after: 5m
      #   derived:
      #     max_unused_for: 1w
      #     retry_misses_after: 5m
      #     retry_malformed_after: 5m
      #   diagnostics:
      #     retention: 1w
    containerSecurityContext: {}
    env: []
    nodeSelector: {}
    persistence:
      accessModes:
        - ReadWriteOnce
      enabled: true
      size: 10Gi
    probeInitialDelaySeconds: 10
    replicas: 1
    resources: {}
    securityContext: {}
    topologySpreadConstraints: []
    usedeployment: true
  cleanup:
    enabled: false
  enabled: false
system:
  adminEmail: ''
  public: false
  url: ''
user:
  create: true
  email: admin@sentry.local
  password: aaaa
vroom:
  affinity: {}
  autoscaling:
    enabled: false
    maxReplicas: 5
    minReplicas: 2
    targetCPUUtilizationPercentage: 50
  containerSecurityContext: {}
  env: []
  nodeSelector: {}
  probeFailureThreshold: 5
  probeInitialDelaySeconds: 10
  probePeriodSeconds: 10
  probeSuccessThreshold: 1
  probeTimeoutSeconds: 2
  replicas: 1
  resources: {}
  securityContext: {}
  service:
    annotations: {}
  sidecars: []
  volumeMounts: []
  volumes: []
zookeeper:
  enabled: true
  nameOverride: zookeeper-clickhouse
  replicaCount: 1
global:
  cattle:
    systemProjectId: p-2wcm2
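
(Side note on the values above: postgresql.extendedConfiguration already raises max_connections to 300. It may be worth verifying the override actually took effect inside the pod, with the same pod-name and credential assumptions as above:

kubectl exec -n <namespace> sentry-postgresql-0 -- sh -c 'PGPASSWORD="$POSTGRES_PASSWORD" psql -U postgres -c "SHOW max_connections;"'
)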

Helm chart version

25.10.0

Steps to reproduce

Install the chart, plug in an app with a bit of traffic, and wait a few days :)
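
To make the leak visible without waiting days, you can poll the total connection count every few minutes and watch it climb; a rough loop, with the same pod-name and credential assumptions as above:

while true; do
  kubectl exec -n <namespace> sentry-postgresql-0 -- sh -c 'PGPASSWORD="$POSTGRES_PASSWORD" psql -U postgres -tAc "SELECT count(*) FROM pg_stat_activity;"'
  sleep 300
done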

Screenshots

(Screenshot attached: 2024-09-27 at 08:35:46)

Logs

No response

Additional context

No response

patsevanton commented 2 days ago

Can you run the following command to get your short (user-supplied) values?

helm get values -n namespace sentry

What were the errors?
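
If postgres is out of connections, the rejections should also appear in the pod logs as "FATAL: sorry, too many clients already". Something like this should surface them (deployment names assume the chart defaults with a release named sentry, adjust to yours):

kubectl logs -n <namespace> deployment/sentry-web --tail=200 | grep -i "too many clients"
kubectl logs -n <namespace> deployment/sentry-ingest-monitors --tail=200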