sentry-kubernetes / charts

Easily deploy Sentry on your Kubernetes Cluster
MIT License

Unclear installation instructions #113

Closed · dnlsndr closed this issue 4 years ago

dnlsndr commented 4 years ago

Hi, I'm not all too familiar with the deprecated chart, but I'm having some trouble installing this chart as a Helm dependency in my project. When doing a helm search sentry after adding the repo from the README.md, the chart version is 3.1.0 as of now. But when I insert it as a chart dependency, run helm dep update and look at the values.yaml in the downloaded sentry-3.1.0.tgz, the file looks very different from the current release in this GitHub repo. And it's not just missing comments: the whole values.yaml has a completely different structure. So now I'm not sure whether the 3.1.0 chart version is an old version or just a completely different chart altogether. Even the Chart.yaml looks very different from the one in the latest 3.0.1 release in this repo.

I'm now quite confused as to how I could get the latest chart of this repo without downloading it from the releases page and inserting it manually. Maybe it's just confusion on my part, but I can imagine others might have the same issue. Can anyone clear that up, please?
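
For context, the steps I followed were roughly along these lines (the repo URL is the one from the README; the repo alias and the umbrella chart path are just placeholders):

helm repo add sentry https://sentry-kubernetes.github.io/charts
helm repo update
helm search repo sentry              # `helm search sentry` on Helm 2
helm dependency update ./my-umbrella-chart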

hairmare commented 4 years ago

If your 3.1.0 looks like this: https://github.com/helm/charts/tree/162bde262b6c2ed316aaff81efd1293d7a2ff775/stable/sentry then you most likely installed stable/sentry and not the one from this repo.

Can you post the relevant requirements sections of your Chart.yaml and Chart.lock?

The dependency in the chart should look something like this:

dependencies:
  - name: sentry
    version: 3.0.1
    repository: https://sentry-kubernetes.github.io/charts
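
To double-check which chart actually gets resolved, you can also inspect it straight from the repo before vendoring it; with Helm 3 that would be something like the following (the alias sentry is just whatever name you gave the repo when adding it):

helm show chart sentry/sentry --version 3.0.1
helm show values sentry/sentry --version 3.0.1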
dnlsndr commented 4 years ago

That's how mine looks, so it's almost exactly the same as yours except for the version field (I've tried it with the 3.0.1 chart version, but it has the same outcome):

dependencies:
  - name: sentry
    version: ^3.1.0
    repository: https://sentry-kubernetes.github.io/charts

No, I'm not using the stable/sentry chart, but I've just noticed that the values.yaml below does somewhat resemble the current values.yaml in this repo. However, there are no comments to be found, nor is the order of the objects in any way similar, and it has way more object keys than the one in the repo.

I just ran a JSON diff on both values.yaml files and there are a ton of differences; if needed I can post the output, but it's very long.
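
For anyone who wants to reproduce the comparison, this is roughly what I did (the paths are just placeholders for wherever the vendored .tgz and the checked-out repo live):

tar -xzf charts/sentry-3.1.0.tgz -C /tmp
diff /tmp/sentry/values.yaml path/to/charts-repo/sentry/values.yaml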

Here is the Helm Chart values.yaml I'm talking about:

auth:
  register: true
clickhouse:
  clickhouse:
    configmap:
      builtin_dictionaries_reload_interval: "3600"
      compression:
        cases:
        - method: zstd
          min_part_size: "10000000000"
          min_part_size_ratio: "0.01"
        enabled: false
      default_session_timeout: "60"
      disable_internal_dns_cache: "1"
      enabled: true
      graphite:
        config:
        - asynchronous_metrics: true
          events: true
          events_cumulative: true
          interval: "60"
          metrics: true
          root_path: one_min
          timeout: "0.1"
        enabled: false
      keep_alive_timeout: "3"
      logger:
        count: "10"
        level: trace
        path: /var/log/clickhouse-server
        size: 1000M
      mark_cache_size: "5368709120"
      max_concurrent_queries: "100"
      max_connections: "4096"
      max_session_timeout: "3600"
      mlock_executable: false
      profiles:
        enabled: false
        profile:
        - config:
            load_balancing: random
            max_memory_usage: "10000000000"
            use_uncompressed_cache: "0"
          name: default
      quotas:
        enabled: false
        quota:
        - config:
          - duration: "3600"
            errors: "0"
            execution_time: "0"
            queries: "0"
            read_rows: "0"
            result_rows: "0"
          name: default
      remote_servers:
        enabled: true
        internal_replication: true
        replica:
          backup:
            enabled: true
          compression: true
          user: default
      umask: "022"
      uncompressed_cache_size: "8589934592"
      users:
        enabled: false
        user:
        - config:
            networks:
            - ::/0
            profile: default
            quota: default
          name: default
      zookeeper_servers:
        config:
        - host: ""
          index: ""
          port: ""
        enabled: false
        operation_timeout_ms: "10000"
        session_timeout_ms: "30000"
    http_port: "8123"
    image: yandex/clickhouse-server
    imagePullPolicy: IfNotPresent
    imageVersion: "19.16"
    ingress:
      enabled: false
    interserver_http_port: "9009"
    livenessProbe:
      enabled: true
      failureThreshold: "3"
      initialDelaySeconds: "30"
      periodSeconds: "30"
      successThreshold: "1"
      timeoutSeconds: "5"
    path: /var/lib/clickhouse
    persistentVolumeClaim:
      dataPersistentVolume:
        accessModes:
        - ReadWriteOnce
        enabled: true
        storage: 30Gi
      enabled: true
      logsPersistentVolume:
        accessModes:
        - ReadWriteOnce
        enabled: false
        storage: 50Gi
    podManagementPolicy: Parallel
    readinessProbe:
      enabled: true
      failureThreshold: "3"
      initialDelaySeconds: "30"
      periodSeconds: "30"
      successThreshold: "1"
      timeoutSeconds: "5"
    replicas: "3"
    tcp_port: "9000"
    updateStrategy: RollingUpdate
  clusterDomain: cluster.local
  enabled: true
  global: {}
  tabix:
    enabled: true
    image: spoonest/clickhouse-tabix-web-client
    imagePullPolicy: IfNotPresent
    imageVersion: stable
    ingress:
      enabled: false
    livenessProbe:
      enabled: true
      failureThreshold: "3"
      initialDelaySeconds: "30"
      periodSeconds: "30"
      successThreshold: "1"
      timeoutSeconds: "5"
    readinessProbe:
      enabled: true
      failureThreshold: "3"
      initialDelaySeconds: "30"
      periodSeconds: "30"
      successThreshold: "1"
      timeoutSeconds: "5"
    replicas: "1"
    security:
      password: admin
      user: admin
    updateStrategy:
      maxSurge: 3
      maxUnavailable: 1
      type: RollingUpdate
  timezone: UTC
config:
  configYml: |
    # No YAML Extension Config Given
  sentryConfPy: |
    # No Python Extension Config Given
  snubaSettingsPy: |
    # No Python Extension Config Given
filestore:
  backend: filesystem
  filesystem:
    path: /var/lib/sentry/files
    persistence:
      accessMode: ReadWriteOnce
      enabled: true
      persistentWorkers: false
      size: 10Gi
  gcs: null
  s3: {}
github: {}
githubSso: {}
hooks:
  dbInit:
    resources:
      limits:
        memory: 2048Mi
      requests:
        cpu: 300m
        memory: 2048Mi
  enabled: true
  snubaInit:
    resources:
      limits:
        cpu: 2000m
        memory: 1Gi
      requests:
        cpu: 700m
        memory: 1Gi
images:
  sentry:
    pullPolicy: IfNotPresent
    repository: getsentry/sentry
    tag: cc9f7d1
  snuba:
    pullPolicy: IfNotPresent
    repository: getsentry/snuba
    tag: 882be95ba0d462a29759a49b2e9aad0c1ce111a9
ingress:
  enabled: false
kafka:
  advertisedListeners: []
  affinity: {}
  allowPlaintextListener: true
  auth:
    brokerUser: user
    enabled: false
    interBrokerUser: admin
  brokerId: -1
  clusterDomain: cluster.local
  containerSecurityContext: {}
  defaultReplicationFactor: 3
  deleteTopicEnable: false
  enabled: true
  externalAccess:
    autoDiscovery:
      enabled: false
      image:
        pullPolicy: IfNotPresent
        pullSecrets: []
        registry: docker.io
        repository: bitnami/kubectl
        tag: 1.17.3-debian-10-r20
      resources:
        limits: {}
        requests: {}
    enabled: false
    service:
      annotations: {}
      loadBalancerIPs: []
      loadBalancerSourceRanges: []
      nodePorts: []
      port: 19092
      type: LoadBalancer
  externalZookeeper:
    servers: []
  extraEnvVars: []
  global: {}
  heapOpts: -Xmx1024m -Xms1024m
  image:
    debug: false
    pullPolicy: IfNotPresent
    pullSecrets: []
    registry: docker.io
    repository: bitnami/kafka
    tag: 2.4.1-debian-10-r21
  listeners: []
  livenessProbe:
    initialDelaySeconds: 10
    tcpSocket:
      port: kafka
    timeoutSeconds: 5
  logFlushIntervalMessages: 10000
  logFlushIntervalMs: 1000
  logRetentionBytes: _1073741824
  logRetentionCheckIntervalMs: 300000
  logRetentionHours: 168
  logSegmentBytes: _1073741824
  logsDirs: /bitnami/kafka/data
  maxMessageBytes: _1000012
  metrics:
    jmx:
      config: |-
        jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:5555/jmxrmi
        lowercaseOutputName: true
        lowercaseOutputLabelNames: true
        ssl: false
        {{- if .Values.metrics.jmx.whitelistObjectNames }}
        whitelistObjectNames: ["{{ join "\",\"" .Values.metrics.jmx.whitelistObjectNames }}"]
        {{- end }}
      enabled: false
      image:
        pullPolicy: IfNotPresent
        pullSecrets: []
        registry: docker.io
        repository: bitnami/jmx-exporter
        tag: 0.12.0-debian-10-r57
      resources:
        limits: {}
        requests: {}
      service:
        annotations:
          prometheus.io/path: /
          prometheus.io/port: '{{ .Values.metrics.jmx.exporterPort }}'
          prometheus.io/scrape: "true"
        loadBalancerSourceRanges: []
        nodePort: ""
        port: 5556
        type: ClusterIP
      whitelistObjectNames:
      - kafka.controller:*
      - kafka.server:*
      - java.lang:*
      - kafka.network:*
      - kafka.log:*
    kafka:
      enabled: false
      image:
        pullPolicy: IfNotPresent
        pullSecrets: []
        registry: docker.io
        repository: bitnami/kafka-exporter
        tag: 1.2.0-debian-10-r58
      resources:
        limits: {}
        requests: {}
      service:
        annotations:
          prometheus.io/path: /metrics
          prometheus.io/port: '{{ .Values.metrics.kafka.port }}'
          prometheus.io/scrape: "true"
        loadBalancerSourceRanges: []
        nodePort: ""
        port: 9308
        type: ClusterIP
    serviceMonitor:
      enabled: false
  nodeSelector: {}
  numIoThreads: 8
  numNetworkThreads: 3
  numPartitions: 1
  numRecoveryThreadsPerDataDir: 1
  offsetsTopicReplicationFactor: 3
  pdb:
    create: true
    maxUnavailable: 1
  persistence:
    accessModes:
    - ReadWriteOnce
    annotations: {}
    enabled: true
    size: 8Gi
  podAnnotations: {}
  podSecurityContext:
    fsGroup: 1001
    runAsUser: 1001
  rbac:
    create: false
  readinessProbe:
    failureThreshold: 6
    initialDelaySeconds: 5
    tcpSocket:
      port: kafka
    timeoutSeconds: 5
  replicaCount: 3
  resources:
    limits: {}
    requests: {}
  service:
    annotations: {}
    loadBalancerSourceRanges: []
    nodePorts:
      kafka: ""
      ssl: ""
    port: 9092
    sslPort: 9093
    type: ClusterIP
  serviceAccount:
    create: true
  sidecars: {}
  socketReceiveBufferBytes: 102400
  socketRequestMaxBytes: _104857600
  socketSendBufferBytes: 102400
  sslEndpointIdentificationAlgorithm: https
  tolerations: []
  transactionStateLogMinIsr: 3
  transactionStateLogReplicationFactor: 3
  updateStrategy: RollingUpdate
  volumePermissions:
    enabled: false
    image:
      pullPolicy: Always
      pullSecrets: []
      registry: docker.io
      repository: bitnami/minideb
      tag: buster
    resources:
      limits: {}
      requests: {}
  zookeeper:
    affinity: {}
    allowAnonymousLogin: true
    auth:
      clientPassword: null
      clientUser: null
      enabled: false
      serverPasswords: null
      serverUsers: null
    autopurge:
      purgeInterval: 0
      snapRetainCount: 3
    clusterDomain: cluster.local
    enabled: true
    fourlwCommandsWhitelist: srvr, mntr
    global: {}
    heapSize: 1024
    image:
      debug: false
      pullPolicy: IfNotPresent
      registry: docker.io
      repository: bitnami/zookeeper
      tag: 3.6.0-debian-10-r21
    initLimit: 10
    listenOnAllIPs: false
    livenessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    logLevel: ERROR
    maxClientCnxns: 60
    metrics:
      affinity: {}
      enabled: false
      image:
        pullPolicy: IfNotPresent
        registry: docker.io
        repository: bitnami/zookeeper-exporter
        tag: 0.1.3-debian-10-r57
      nodeSelector: {}
      podAnnotations:
        prometheus.io/port: "9141"
        prometheus.io/scrape: "true"
      podLabels: {}
      resources: {}
      serviceMonitor:
        enabled: false
        namespace: null
      timeoutSeconds: 3
      tolerations: []
    networkPolicy:
      enabled: false
    nodeSelector: {}
    persistence:
      accessModes:
      - ReadWriteOnce
      annotations: {}
      enabled: true
      size: 8Gi
    podAnnotations: {}
    podDisruptionBudget:
      maxUnavailable: 1
    podManagementPolicy: Parallel
    readinessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    replicaCount: 1
    resources:
      requests:
        cpu: 250m
        memory: 256Mi
    securityContext:
      enabled: true
      fsGroup: 1001
      runAsUser: 1001
    service:
      electionPort: 3888
      followerPort: 2888
      port: 2181
      publishNotReadyAddresses: true
      type: ClusterIP
    serviceAccount:
      create: false
    syncLimit: 5
    tickTime: 2000
    tolerations: []
    updateStrategy: RollingUpdate
    volumePermissions:
      enabled: false
      image:
        pullPolicy: Always
        registry: docker.io
        repository: bitnami/minideb
        tag: buster
      resources: {}
  zookeeperConnectionTimeoutMs: 6000
mail:
  backend: dummy
  from: ""
  host: ""
  password: ""
  port: 25
  useTls: false
  username: ""
postgresql:
  enabled: true
  extraEnv: []
  global:
    postgresql: {}
  hostOverride: ""
  image:
    debug: false
    pullPolicy: IfNotPresent
    registry: docker.io
    repository: bitnami/postgresql
    tag: 11.6.0-debian-10-r5
  ldap:
    baseDN: ""
    bind_password: null
    bindDN: ""
    enabled: false
    port: ""
    prefix: ""
    scheme: ""
    search_attr: ""
    search_filter: ""
    server: ""
    suffix: ""
    tls: false
    url: ""
  livenessProbe:
    enabled: true
    failureThreshold: 6
    initialDelaySeconds: 30
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  master:
    affinity: {}
    annotations: {}
    extraInitContainers: ""
    extraVolumeMounts: []
    extraVolumes: []
    labels: {}
    nodeSelector: {}
    podAnnotations: {}
    podLabels: {}
    priorityClassName: ""
    tolerations: []
  metrics:
    enabled: false
    image:
      pullPolicy: IfNotPresent
      registry: docker.io
      repository: bitnami/postgres-exporter
      tag: 0.8.0-debian-10-r4
    livenessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    prometheusRule:
      additionalLabels: {}
      enabled: false
      namespace: ""
      rules: []
    readinessProbe:
      enabled: true
      failureThreshold: 6
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    securityContext:
      enabled: false
      runAsUser: 1001
    service:
      annotations:
        prometheus.io/port: "9187"
        prometheus.io/scrape: "true"
      loadBalancerIP: null
      type: ClusterIP
    serviceMonitor:
      additionalLabels: {}
      enabled: false
  nameOverride: sentry-postgresql
  networkPolicy:
    allowExternal: true
    enabled: false
  persistence:
    accessModes:
    - ReadWriteOnce
    annotations: {}
    enabled: true
    mountPath: /bitnami/postgresql
    size: 8Gi
    subPath: ""
  postgresqlDataDir: /bitnami/postgresql/data
  postgresqlDatabase: sentry
  postgresqlUsername: postgres
  readinessProbe:
    enabled: true
    failureThreshold: 6
    initialDelaySeconds: 5
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  replication:
    applicationName: my_application
    enabled: false
    numSynchronousReplicas: 1
    password: repl_password
    slaveReplicas: 2
    synchronousCommit: "on"
    user: repl_user
  resources:
    requests:
      cpu: 250m
      memory: 256Mi
  securityContext:
    enabled: true
    fsGroup: 1001
    runAsUser: 1001
  service:
    annotations: {}
    port: 5432
    type: ClusterIP
  serviceAccount:
    enabled: false
  shmVolume:
    enabled: true
  slave:
    affinity: {}
    annotations: {}
    extraInitContainers: ""
    extraVolumeMounts: []
    extraVolumes: []
    labels: {}
    nodeSelector: {}
    podAnnotations: {}
    podLabels: {}
    priorityClassName: ""
    tolerations: []
  updateStrategy:
    type: RollingUpdate
  volumePermissions:
    enabled: true
    image:
      pullPolicy: Always
      registry: docker.io
      repository: bitnami/minideb
      tag: stretch
    securityContext:
      runAsUser: 0
prefix: null
rabbitmq:
  definitions:
    policies: |-
      {
        "name": "ha-all",
        "pattern": "^((?!celeryev.*).)*$",
        "vhost": "/",
        "definition": {
          "ha-mode": "all",
          "ha-sync-mode": "automatic",
          "ha-sync-batch-size": 1
        }
      }
  enabled: true
  forceBoot: true
  nameOverride: ""
  persistentVolume:
    enabled: true
  podDisruptionBudget:
    minAvailable: 1
  rabbitmqErlangCookie: pHgpy3Q6adTskzAT6bLHCFqFTF7lMxhA
  rabbitmqPassword: guest
  rabbitmqUsername: guest
  replicaCount: 3
  resources: {}
rabbitmq-ha:
  advancedConfig: ""
  affinity: {}
  busyboxImage:
    pullPolicy: IfNotPresent
    repository: busybox
    tag: 1.30.1
  clusterDomain: cluster.local
  definitions:
    bindings: ""
    exchanges: ""
    globalParameters: ""
    parameters: ""
    permissions: ""
    policies: ""
    queues: ""
    users: ""
    vhosts: ""
  definitionsSource: definitions.json
  env: {}
  existingConfigMap: false
  existingSecret: ""
  extraConfig: ""
  extraContainers: []
  extraInitContainers: []
  extraLabels: {}
  extraPlugins: |
    rabbitmq_shovel,
    rabbitmq_shovel_management,
    rabbitmq_federation,
    rabbitmq_federation_management,
  extraVolumeMounts: []
  extraVolumes: []
  forceBoot: false
  global: {}
  image:
    pullPolicy: IfNotPresent
    repository: rabbitmq
    tag: 3.8.0-alpine
  ingress:
    annotations: {}
    enabled: false
    path: /
    tls: false
    tlsSecret: myTlsSecret
  initContainer:
    resources: {}
  lifecycle: {}
  livenessProbe:
    exec:
      command:
      - /bin/sh
      - -c
      - 'wget -O - -q --header "Authorization: Basic `echo -n \"$RABBIT_MANAGEMENT_USER:$RABBIT_MANAGEMENT_PASSWORD\"
        | base64`" http://localhost:15672/api/healthchecks/node | grep -qF "{\"status\":\"ok\"}"'
    failureThreshold: 6
    initialDelaySeconds: 120
    periodSeconds: 10
    timeoutSeconds: 5
  managementUsername: management
  nodeSelector: {}
  persistentVolume:
    accessModes:
    - ReadWriteOnce
    annotations: {}
    enabled: false
    name: data
    size: 8Gi
  podAnnotations: {}
  podAntiAffinity: soft
  podAntiAffinityTopologyKey: kubernetes.io/hostname
  podDisruptionBudget: {}
  podManagementPolicy: OrderedReady
  prometheus:
    exporter:
      capabilities: bert,no_sort
      enabled: false
      env: {}
      image:
        pullPolicy: IfNotPresent
        repository: kbudde/rabbitmq-exporter
        tag: v0.29.0
      port: 9090
      resources: {}
    operator:
      alerts:
        enabled: true
        labels: {}
        selector:
          role: alert-rules
      enabled: true
      serviceMonitor:
        interval: 10s
        namespace: monitoring
        selector:
          prometheus: kube-prometheus
  rabbitmqAmqpsSupport:
    amqpsNodePort: 5671
    config: |
      # listeners.ssl.default             = 5671
      # ssl_options.cacertfile            = /etc/cert/cacert.pem
      # ssl_options.certfile              = /etc/cert/cert.pem
      # ssl_options.keyfile               = /etc/cert/key.pem
      # ssl_options.verify                = verify_peer
      # ssl_options.fail_if_no_peer_cert  = false
    enabled: false
  rabbitmqAuth:
    config: |
      # auth_mechanisms.1 = PLAIN
      # auth_mechanisms.2 = AMQPLAIN
      # auth_mechanisms.3 = EXTERNAL
    enabled: false
  rabbitmqAuthHTTP:
    config: |
      # auth_backends.1 = http
      # auth_http.user_path     = http://some-server/auth/user
      # auth_http.vhost_path    = http://some-server/auth/vhost
      # auth_http.resource_path = http://some-server/auth/resource
      # auth_http.topic_path    = http://some-server/auth/topic
    enabled: false
  rabbitmqCert:
    cacertfile: ""
    certfile: ""
    enabled: false
    existingSecret: ""
    keyfile: ""
  rabbitmqClusterPartitionHandling: autoheal
  rabbitmqEpmdPort: 4369
  rabbitmqHipeCompile: false
  rabbitmqLDAPPlugin:
    config: |
      # auth_backends.1 = ldap
      # auth_ldap.servers.1  = my-ldap-server
      # auth_ldap.user_dn_pattern = cn=${username},ou=People,dc=example,dc=com
      # auth_ldap.use_ssl    = false
      # auth_ldap.port       = 389
      # auth_ldap.log        = false
    enabled: false
  rabbitmqMQTTPlugin:
    config: |
      # mqtt.default_user     = guest
      # mqtt.default_pass     = guest
      # mqtt.allow_anonymous  = true
    enabled: false
  rabbitmqManagerPort: 15672
  rabbitmqMemoryHighWatermark: 256MB
  rabbitmqMemoryHighWatermarkType: absolute
  rabbitmqNodePort: 5672
  rabbitmqPrometheusPlugin:
    config: |
      ## prometheus.path and prometheus.tcp.port can be set above
    enabled: false
    nodePort: null
    path: /metrics
    port: 15692
  rabbitmqSTOMPPlugin:
    config: |
      # stomp.default_user = guest
      # stomp.default_pass = guest
    enabled: false
  rabbitmqUsername: guest
  rabbitmqVhost: /
  rabbitmqWebMQTTPlugin:
    config: |
      # web_mqtt.ssl.port       = 12345
      # web_mqtt.ssl.backlog    = 1024
      # web_mqtt.ssl.certfile   = /etc/cert/cacert.pem
      # web_mqtt.ssl.keyfile    = /etc/cert/cert.pem
      # web_mqtt.ssl.cacertfile = /etc/cert/key.pem
      # web_mqtt.ssl.password   = changeme
    enabled: false
  rabbitmqWebSTOMPPlugin:
    config: |
      # web_stomp.ws_frame = binary
      # web_stomp.cowboy_opts.max_keepalive = 10
    enabled: false
  rbac:
    create: true
  readinessProbe:
    exec:
      command:
      - /bin/sh
      - -c
      - 'wget -O - -q --header "Authorization: Basic `echo -n \"$RABBIT_MANAGEMENT_USER:$RABBIT_MANAGEMENT_PASSWORD\"
        | base64`" http://localhost:15672/api/healthchecks/node | grep -qF "{\"status\":\"ok\"}"'
    failureThreshold: 6
    initialDelaySeconds: 20
    periodSeconds: 5
    timeoutSeconds: 3
  replicaCount: 3
  resources: {}
  securityContext:
    fsGroup: 101
    runAsGroup: 101
    runAsNonRoot: true
    runAsUser: 100
  service:
    amqpNodePort: null
    annotations: {}
    clusterIP: None
    epmdNodePort: null
    externalIPs: []
    loadBalancerIP: ""
    loadBalancerSourceRanges: []
    managerNodePort: null
    type: ClusterIP
  serviceAccount:
    automountServiceAccountToken: true
    create: true
  statefulSetAnnotations: {}
  terminationGracePeriodSeconds: 10
  tolerations: []
  updateStrategy: OnDelete
redis:
  cluster:
    enabled: true
    slaveCount: 2
  clusterDomain: cluster.local
  configmap: |-
    # Enable AOF https://redis.io/topics/persistence#append-only-file
    appendonly yes
    # Disable RDB persistence, AOF persistence already enabled.
    save ""
  enabled: true
  global: {}
  hostOverride: ""
  image:
    pullPolicy: IfNotPresent
    registry: docker.io
    repository: bitnami/redis
    tag: 5.0.5-debian-9-r141
  master:
    affinity: {}
    command: /run.sh
    configmap: null
    disableCommands:
    - FLUSHDB
    - FLUSHALL
    extraFlags: []
    livenessProbe:
      enabled: true
      failureThreshold: 5
      initialDelaySeconds: 5
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 5
    persistence:
      accessModes:
      - ReadWriteOnce
      enabled: true
      path: /data
      size: 8Gi
      subPath: ""
    podAnnotations: {}
    podLabels: {}
    readinessProbe:
      enabled: true
      failureThreshold: 5
      initialDelaySeconds: 5
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 1
    service:
      annotations: {}
      labels: {}
      loadBalancerIP: null
      port: 6379
      type: ClusterIP
    statefulset:
      updateStrategy: RollingUpdate
  metrics:
    enabled: false
    image:
      pullPolicy: IfNotPresent
      registry: docker.io
      repository: bitnami/redis-exporter
      tag: 1.1.1-debian-9-r13
    podAnnotations:
      prometheus.io/port: "9121"
      prometheus.io/scrape: "true"
    service:
      annotations: {}
      labels: {}
      type: ClusterIP
    serviceMonitor:
      enabled: false
      selector:
        prometheus: kube-prometheus
  nameOverride: sentry-redis
  networkPolicy:
    enabled: false
  persistence: {}
  rbac:
    create: false
    role:
      rules: []
  redisPort: 6379
  securityContext:
    enabled: true
    fsGroup: 1001
    runAsUser: 1001
  sentinel:
    configmap: null
    downAfterMilliseconds: 60000
    enabled: false
    failoverTimeout: 18000
    image:
      pullPolicy: IfNotPresent
      registry: docker.io
      repository: bitnami/redis-sentinel
      tag: 5.0.5-debian-9-r134
    initialCheckTimeout: 5
    livenessProbe:
      enabled: true
      failureThreshold: 5
      initialDelaySeconds: 5
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 5
    masterSet: mymaster
    parallelSyncs: 1
    port: 26379
    quorum: 2
    readinessProbe:
      enabled: true
      failureThreshold: 5
      initialDelaySeconds: 5
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 1
    service:
      annotations: {}
      labels: {}
      loadBalancerIP: null
      redisPort: 6379
      sentinelPort: 26379
      type: ClusterIP
  serviceAccount:
    create: false
    name: null
  slave:
    affinity: {}
    command: /run.sh
    configmap: null
    disableCommands:
    - FLUSHDB
    - FLUSHALL
    extraFlags: []
    livenessProbe:
      enabled: true
      failureThreshold: 5
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    persistence:
      accessModes:
      - ReadWriteOnce
      enabled: true
      path: /data
      size: 8Gi
      subPath: ""
    podAnnotations: {}
    podLabels: {}
    port: 6379
    readinessProbe:
      enabled: true
      failureThreshold: 5
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 10
    service:
      annotations: {}
      labels: {}
      loadBalancerIP: null
      port: 6379
      type: ClusterIP
    statefulset:
      updateStrategy: RollingUpdate
  sysctlImage:
    command: []
    enabled: false
    mountHostSys: false
    pullPolicy: Always
    registry: docker.io
    repository: bitnami/minideb
    resources: {}
    tag: stretch
  usePassword: false
  usePasswordFile: false
  volumePermissions:
    enabled: false
    image:
      pullPolicy: Always
      registry: docker.io
      repository: bitnami/minideb
      tag: stretch
    resources: {}
sentry:
  cron:
    affinity: {}
    env: {}
    nodeSelector: {}
    resources: {}
  postProcessForward:
    affinity: {}
    env: {}
    nodeSelector: {}
    replicas: 1
    resources: {}
  web:
    affinity: {}
    autoscaling:
      enabled: false
      maxReplicas: 5
      minReplicas: 2
      targetCPUUtilizationPercentage: 50
    env: {}
    nodeSelector: {}
    probeInitialDelaySeconds: 10
    replicas: 1
    resources: {}
  worker:
    affinity: {}
    autoscaling:
      enabled: false
      maxReplicas: 5
      minReplicas: 2
      targetCPUUtilizationPercentage: 50
    env: {}
    nodeSelector: {}
    replicas: 3
    resources: {}
service:
  annotations: {}
  externalPort: 9000
  name: sentry
  type: ClusterIP
slack: {}
snuba:
  api:
    affinity: {}
    autoscaling:
      enabled: false
      maxReplicas: 5
      minReplicas: 2
      targetCPUUtilizationPercentage: 50
    env: {}
    nodeSelector: {}
    probeInitialDelaySeconds: 10
    replicas: 1
    resources: {}
  consumer:
    affinity: {}
    env: {}
    nodeSelector: {}
    replicas: 1
    resources: {}
  dbInitJob:
    env: {}
  migrateJob:
    env: {}
  outcomesConsumer:
    affinity: {}
    env: {}
    nodeSelector: {}
    replicas: 1
    resources: {}
  replacer:
    affinity: {}
    env: {}
    nodeSelector: {}
    resources: {}
  sessionsConsumer:
    affinity: {}
    env: {}
    nodeSelector: {}
    replicas: 1
    resources: {}
symbolicator:
  enabled: false
system:
  adminEmail: ""
  public: false
  secretKey: icLq77rCyY_qrMMpXa6TQNjkDV6mU!c
  url: ""
user:
  create: true
  email: admin@sentry.local
  password: aaaa

And just for reference, here is the one from the repo:

prefix:

user:
  create: true
  email: admin@sentry.local
  password: aaaa

images:
  sentry:
    repository: getsentry/sentry
    tag: cc9f7d1
    pullPolicy: IfNotPresent
    # imagePullSecrets: []
  snuba:
    repository: getsentry/snuba
    tag: 882be95ba0d462a29759a49b2e9aad0c1ce111a9
    pullPolicy: IfNotPresent
    # imagePullSecrets: []

sentry:
  web:
    replicas: 1
    env: {}
    probeInitialDelaySeconds: 10
    resources: {}
    affinity: {}
    nodeSelector: {}
    # tolerations: []
    # podLabels: []

    autoscaling:
      enabled: false
      minReplicas: 2
      maxReplicas: 5
      targetCPUUtilizationPercentage: 50

  worker:
    replicas: 3
    # concurrency: 4
    env: {}
    resources: {}
    affinity: {}
    nodeSelector: {}
    # tolerations: []
    # podLabels: []

    # it's better to use prometheus adapter and scale based on
    # the size of the rabbitmq queue
    autoscaling:
      enabled: false
      minReplicas: 2
      maxReplicas: 5
      targetCPUUtilizationPercentage: 50

  cron:
    env: {}
    resources: {}
    affinity: {}
    nodeSelector: {}
    # tolerations: []
    # podLabels: []
  postProcessForward:
    replicas: 1
    env: {}
    resources: {}
    affinity: {}
    nodeSelector: {}
    # tolerations: []
    # podLabels: []

snuba:
  api:
    replicas: 1
    env: {}
    probeInitialDelaySeconds: 10
    resources: {}
    affinity: {}
    nodeSelector: {}
    # tolerations: []
    # podLabels: []

    autoscaling:
      enabled: false
      minReplicas: 2
      maxReplicas: 5
      targetCPUUtilizationPercentage: 50

  consumer:
    replicas: 1
    env: {}
    resources: {}
    affinity: {}
    nodeSelector: {}
    # tolerations: []
    # podLabels: []

  outcomesConsumer:
    replicas: 1
    env: {}
    resources: {}
    affinity: {}
    nodeSelector: {}
    # tolerations: []
    # podLabels: []

  sessionsConsumer:
    replicas: 1
    env: {}
    resources: {}
    affinity: {}
    nodeSelector: {}
    # tolerations: []
    # podLabels: []

  replacer:
    env: {}
    resources: {}
    affinity: {}
    nodeSelector: {}
    # tolerations: []
    # podLabels: []

  dbInitJob:
    env: {}

  migrateJob:
    env: {}

hooks:
  enabled: true
  dbInit:
    resources:
      limits:
        memory: 2048Mi
      requests:
        cpu: 300m
        memory: 2048Mi
  snubaInit:
    resources:
      limits:
        cpu: 2000m
        memory: 1Gi
      requests:
        cpu: 700m
        memory: 1Gi
system:
  url: ""
  adminEmail: ""
  secretKey: 'icLq77rCyY_qrMMpXa6TQNjkDV6mU!c'
  public: false #  This should only be used if you’re installing Sentry behind your company’s firewall.

mail:
  backend: dummy # smtp
  useTls: false
  username: ""
  password: ""
  port: 25
  host: ""
  from: ""

symbolicator:
  enabled: false

auth:
  register: true

service:
  name: sentry
  type: ClusterIP
  externalPort: 9000
  annotations: {}
  # externalIPs:
  # - 192.168.0.1
  # loadBalancerSourceRanges: []

github: {} # https://github.com/settings/apps (Create a Github App)
# github:
#   appId: "xxxx"
#   appName: MyAppName
#   clientId: "xxxxx"
#   clientSecret: "xxxxx"
#   privateKey: "-----BEGIN RSA PRIVATE KEY-----\nMIIEpA" !!!! Don't forget a trailing \n
#   webhookSecret:  "xxxxx`"

githubSso: {} # https://github.com/settings/developers (Create a OAuth App)
  # clientId: "xx"
  # clientSecret: "xx"

slack: {}
# slack:
#   clientId:
#   clientSecret:
#   verificationToken:

ingress:
  enabled: false
  # annotations:
  #   kubernetes.io/tls-acme:
  #   certmanager.k8s.io/issuer:
  #   nginx.ingress.kubernetes.io/proxy-body-size:
  #
  # hostname:
  #
  # tls:
  # - secretName:
  #   hosts:

filestore:
  # Set to one of filesystem, gcs or s3 as supported by Sentry.
  backend: filesystem

  filesystem:
    path: /var/lib/sentry/files

    ## Enable persistence using Persistent Volume Claims
    ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
    ##
    persistence:
      enabled: true
      ## database data Persistent Volume Storage Class
      ## If defined, storageClassName: <storageClass>
      ## If set to "-", storageClassName: "", which disables dynamic provisioning
      ## If undefined (the default) or set to null, no storageClassName spec is
      ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
      ##   GKE, AWS & OpenStack)
      ##
      # storageClass: "-"
      accessMode: ReadWriteOnce
      size: 10Gi

      ## Whether to mount the persistent volume to the Sentry worker and
      ## cron deployments. This setting needs to be enabled for some advanced
      ## Sentry features, such as private source maps. If you disable this
      ## setting, the Sentry workers will not have access to artifacts you upload
      ## through the web deployment.
      ## Please note that you may need to change your accessMode to ReadWriteMany
      ## if you plan on having the web, worker and cron deployments run on
      ## different nodes.
      persistentWorkers: false

  ## Point this at a pre-configured secret containing a service account. The resulting
  ## secret will be mounted at /var/run/secrets/google
  gcs:
    # credentialsFile: credentials.json
    #  secretName:
    #  bucketName:

  ## Currently unconfigured and changing this has no impact on the template configuration.
  s3: {}
  #  accessKey:
  #  secretKey:
  #  bucketName:
  #  endpointUrl:
  #  signature_version:
  #  region_name:
  #  default_acl:

config:
  configYml: |
    # No YAML Extension Config Given
  sentryConfPy: |
    # No Python Extension Config Given
  snubaSettingsPy: |
    # No Python Extension Config Given

clickhouse:
  enabled: true
  clickhouse:
    imageVersion: "19.16"
    configmap:
      remote_servers:
        internal_replication: true
    persistentVolumeClaim:
      enabled: true
      dataPersistentVolume:
        enabled: true
        accessModes:
        - "ReadWriteOnce"
        storage: "30Gi"

kafka:
  enabled: true
  replicaCount: 3
  allowPlaintextListener: true
  defaultReplicationFactor: 3
  offsetsTopicReplicationFactor: 3
  transactionStateLogReplicationFactor: 3
  transactionStateLogMinIsr: 3

  service:
    port: 9092

redis:
  ## Required if the Redis component of this chart is disabled. (Existing Redis)
  #
  hostOverride: ""
  enabled: true
  nameOverride: sentry-redis
  usePassword: false
  # Only used when internal redis is disabled
  # host: redis
  # Just omit the password field if your redis cluster doesn't use password
  # password: redis
  # port: 6379
  master:
    persistence:
      enabled: true

postgresql:
  ## Required if the Postgresql component of this chart is disabled. (Existing Postgres)
  #
  hostOverride: ""
  enabled: true
  nameOverride: sentry-postgresql
  postgresqlUsername: postgres
  postgresqlDatabase: sentry
  # Only used when internal PG is disabled
  # postgresqlHost: postgres
  # postgresqlPassword: postgres
  # postgresqlPort: 5432
  # postgresSslMode: require
  replication:
    enabled: false
    slaveReplicas: 2
    synchronousCommit: "on"
    numSynchronousReplicas: 1

rabbitmq:
  ## If disabled, Redis will be used instead as the broker. 
  enabled: true
  forceBoot: true
  replicaCount: 3
  rabbitmqErlangCookie: pHgpy3Q6adTskzAT6bLHCFqFTF7lMxhA
  rabbitmqUsername: guest
  rabbitmqPassword: guest
  nameOverride: ""

  podDisruptionBudget:
    minAvailable: 1

  persistentVolume:
    enabled: true
  resources: {}
  # rabbitmqMemoryHighWatermark: 600MB
  # rabbitmqMemoryHighWatermarkType: absolute

  definitions:
    policies: |-
     {
       "name": "ha-all",
       "pattern": "^((?!celeryev.*).)*$",
       "vhost": "/",
       "definition": {
         "ha-mode": "all",
         "ha-sync-mode": "automatic",
         "ha-sync-batch-size": 1
       }
     }
dnlsndr commented 4 years ago

Oh and here is the Chart.lock entry:

dependencies:
- name: sentry
  repository: https://sentry-kubernetes.github.io/charts
  version: 3.1.0
dnlsndr commented 4 years ago

This is the release from the gh-pages branch I'm talking about: https://github.com/sentry-kubernetes/charts/blob/gh-pages/sentry-3.1.0.tgz

Mokto commented 4 years ago

At first sight the values you posted seem OK? I'm not sure I understand your issue, tbh.

The values.yaml that you posted seems to contain the values from all subcharts: Postgres, ClickHouse, etc.

dnlsndr commented 4 years ago

My only problem is the confusion between the values.yaml in the repo and the one shipped in the Helm chart. Why are the comments stripped out, and why are all the keys shuffled around compared to the repo's values.yaml? I imagine it makes things generally quite confusing: you want to change a parameter, you look into the downloaded chart and find a values.yaml that contains no comments and no formatting that would make the general hierarchy easier to comprehend. You then go to GitHub to check whether there might be some more documentation and find a values.yaml file that at first glance looks completely different from the one in the chart. Where is this change happening? Is there some script in the GitHub Actions workflow that merges/rearranges the values.yaml file or something?

The values.yaml that you posted seems to contain the values from all subcharts: Postgres, ClickHouse, etc.

Yeah, but doesn't the repo's values.yaml also contain all the values for the subcharts? I just don't understand why there is so much more in the chart's values.yaml compared to the one in the repo.

I'm not trying to nag; I know this is a very low-priority issue, but it might cause some confusion when trying to install/configure the chart.

Mokto commented 4 years ago

I'm thinking it might be because we include the subchart .tgz archives? I'm not sure.

The main values.yaml doesn't contain ALL the subchart values.

dnlsndr commented 4 years ago

Ok, but why?

As I understand it, the general consensus in the Helm community is that your main values.yaml should only include the values of the subcharts that you actually want to change.

Looking at the compiled values.yaml, ALL keys are included regardless of whether they are even needed. Many object keys don't even have a value other than an empty object: someKey: {}
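
Just to illustrate what I mean: with that approach, a parent chart that only wants to tweak a couple of subchart defaults would ship something roughly like this in its values.yaml (the keys below are only an example picked from the defaults above, not a recommendation):

sentry:
  redis:
    usePassword: false
  postgresql:
    postgresqlDatabase: sentry

Everything else would simply fall through to the defaults baked into the subcharts.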

Mokto commented 4 years ago

I've done it because it's easier to use for local development. (I work with repository: file://../charts/sentry).
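
For reference, a local dependency like that would look roughly like this in the consuming Chart.yaml (the path is Mokto's example; the version just needs to match whatever the local chart declares):

dependencies:
  - name: sentry
    version: 3.1.0
    repository: file://../charts/sentry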

vishnu123sai commented 4 years ago

Hi, it would also be good to provide the resource hierarchy, i.e. which k8s resources depend on which. That would help people who want to use plain Kubernetes resource manifests instead of the Helm chart.
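
If you only need the rendered manifests, rendering the chart locally should get you most of the way there; with Helm 3 that would be something along these lines (release name, repo alias and output file are just placeholders):

helm repo add sentry https://sentry-kubernetes.github.io/charts
helm template my-sentry sentry/sentry --version 3.1.0 > sentry-manifests.yaml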

Mokto commented 4 years ago

Hey, I'll close this as "won't fix". It's much easier for local development. If you have another solution, I'll be happy to use it!