rancher / terraform-provider-rancher2

Terraform Rancher2 provider
https://www.terraform.io/docs/providers/rancher2/
Mozilla Public License 2.0

Bug - rancher2_app_v2 is not idempotent #500

Closed: ghost closed this issue 3 years ago

ghost commented 3 years ago

Hello,

At the moment I'm terraforming the new Rancher 2.5 rancher-monitoring app. When I apply the same code multiple times, it wants to change things over and over again, so it is not idempotent.

My Setup:

Terraform:

resource "rancher2_app_v2" "rancher-monitoring" {
  cluster_id = var.rancher_cluster_id
  name = "rancher-monitoring"
  namespace = "cattle-monitoring-system"
  repo_name = "rancher-charts"
  chart_name = "rancher-monitoring"
  chart_version = "9.4.200"
  values = templatefile("${path.module}/templates/rancher-monitoring.values.yml", {})
}

rancher-monitoring.values.yml

rkeControllerManager:
  enabled: true

rkeScheduler:
  enabled: true

rkeProxy:
  enabled: true

rkeEtcd:
  enabled: true

alertmanager:
  alertmanagerSpec:
    useExistingSecret: true
    configSecret: alertmanager-rancher-monitoring-alertmanager
  config:
    global:
      resolve_timeout: 5m
    receivers:
    - name: "null"
    route:
      group_by:
      - job
      group_interval: 5m
      group_wait: 30s
      receiver: 'null'
      repeat_interval: 12h
      routes:
      - match:
          alertname: Watchdog
        receiver: 'null'
    templates:
      - /etc/alertmanager/config/*.tmpl

prometheus:
  prometheusSpec:
    retention: 30d
    retentionSize: 50Gi
    storageSpec:
      volumeClaimTemplate:
        spec:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 50Gi
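
(Side note: the templatefile call above passes an empty variable map, so plain file() would behave identically here. If the values were parameterized later, a sketch could look like the following; the retention/storage variables are purely illustrative and the template would then have to reference ${retention} and ${storage}.)

resource "rancher2_app_v2" "rancher-monitoring" {
  cluster_id    = var.rancher_cluster_id
  name          = "rancher-monitoring"
  namespace     = "cattle-monitoring-system"
  repo_name     = "rancher-charts"
  chart_name    = "rancher-monitoring"
  chart_version = "9.4.200"
  # Illustrative only: pass chart settings into the template instead of
  # hard-coding them in the YAML file.
  values = templatefile("${path.module}/templates/rancher-monitoring.values.yml", {
    retention = "30d"
    storage   = "50Gi"
  })
}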

Applying this code works without problems:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # rancher2_app_v2.rancher-monitoring will be created
  + resource "rancher2_app_v2" "rancher-monitoring" {
      + annotations                 = (known after apply)
      + chart_name                  = "rancher-monitoring"
      + chart_version               = "9.4.200"
      + cleanup_on_fail             = false
      + cluster_id                  = "c-4zpxt"
      + cluster_name                = (known after apply)
      + disable_hooks               = false
      + disable_open_api_validation = false
      + force_upgrade               = false
      + id                          = (known after apply)
      + labels                      = (known after apply)
      + name                        = "rancher-monitoring"
      + namespace                   = "cattle-monitoring-system"
      + repo_name                   = "rancher-charts"
      + values                      = <<~EOT
            rkeControllerManager:
              enabled: true

            rkeScheduler:
              enabled: true

            rkeProxy:
              enabled: true

            rkeEtcd:
              enabled: true

            alertmanager:
              alertmanagerSpec:  ## Why are the next two lines needed? (that should be default)
                useExistingSecret: true
                configSecret: alertmanager-rancher-monitoring-alertmanager
              config:
                global:
                  resolve_timeout: 5m
                receivers:
                - name: "null"
                route:
                  group_by:
                  - job
                  group_interval: 5m
                  group_wait: 30s
                  receiver: 'null'
                  repeat_interval: 12h
                  routes:
                  - match:
                      alertname: Watchdog
                    receiver: 'null'
                templates:
                  - /etc/alertmanager/config/*.tmpl

            prometheus:
              prometheusSpec:
                retention: 30d
                retentionSize: 50Gi
                storageSpec:
                  volumeClaimTemplate:
                    spec:
                      accessModes:
                      - ReadWriteOnce
                      resources:
                        requests:
                          storage: 50Gi
        EOT
      + wait                        = false
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

rancher2_app_v2.rancher-monitoring: Creating...
rancher2_app_v2.rancher-monitoring: Still creating... [10s elapsed]
rancher2_app_v2.rancher-monitoring: Still creating... [20s elapsed]
rancher2_app_v2.rancher-monitoring: Still creating... [30s elapsed]
rancher2_app_v2.rancher-monitoring: Still creating... [40s elapsed]
rancher2_app_v2.rancher-monitoring: Still creating... [50s elapsed]
rancher2_app_v2.rancher-monitoring: Still creating... [1m0s elapsed]
rancher2_app_v2.rancher-monitoring: Still creating... [1m10s elapsed]
rancher2_app_v2.rancher-monitoring: Still creating... [1m20s elapsed]
rancher2_app_v2.rancher-monitoring: Creation complete after 1m28s [id=cattle-monitoring-system/rancher-monitoring]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

When I apply the same code again, I get the following change:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # rancher2_app_v2.rancher-monitoring will be updated in-place
  ~ resource "rancher2_app_v2" "rancher-monitoring" {
        annotations                 = {
            "objectset.rio.cattle.io/applied"         = ....
        chart_name                  = "rancher-monitoring"
        chart_version               = "9.4.200"
        cleanup_on_fail             = false
        cluster_id                  = "c-4zpxt"
        cluster_name                = "tfm-rancher-monitoring"
        disable_hooks               = false
        disable_open_api_validation = false
        force_upgrade               = false
        id                          = "cattle-monitoring-system/rancher-monitoring"
        labels                      = {
            "objectset.rio.cattle.io/hash" = "afd0d9d7cfc6e6d7ab5c7044fb2bc771e55109c1"
        }
        name                        = "rancher-monitoring"
        namespace                   = "cattle-monitoring-system"
        repo_name                   = "rancher-charts"
      ~ values                      = <<~EOT
          + rkeControllerManager:
          +   enabled: true
          + 
          + rkeScheduler:
          +   enabled: true
          + 
          + rkeProxy:
          +   enabled: true
          + 
          + rkeEtcd:
          +   enabled: true
          + 
            alertmanager:
              alertmanagerSpec:
          -     configSecret: alertmanager-rancher-monitoring-alertmanager
                useExistingSecret: true
          +     configSecret: alertmanager-rancher-monitoring-alertmanager
              config:
                global:
                  resolve_timeout: 5m
                receivers:
                - name: "null"
                route:
                  group_by:
                  - job
                  group_interval: 5m
                  group_wait: 30s
          -       receiver: "null"
          +       receiver: 'null'
                  repeat_interval: 12h
                  routes:
                  - match:
                      alertname: Watchdog
          -         receiver: "null"
          +         receiver: 'null'
                templates:
          -     - /etc/alertmanager/config/*.tmpl
          - global:
          -   cattle:
          -     clusterId: c-4zpxt
          -     clusterName: tfm-rancher-monitoring
          +       - /etc/alertmanager/config/*.tmpl
          + 
            prometheus:
              prometheusSpec:
                retention: 30d
                retentionSize: 50Gi
                storageSpec:
                  volumeClaimTemplate:
                    spec:
                      accessModes:
                      - ReadWriteOnce
                      resources:
                        requests:
                          storage: 50Gi
          - rkeControllerManager:
          -   enabled: true
          - rkeEtcd:
          -   enabled: true
          - rkeProxy:
          -   enabled: true
          - rkeScheduler:
          -   enabled: true
        EOT
        wait                        = false
    }

Plan: 0 to add, 1 to change, 0 to destroy.

What I can tell so far is that there is a conversion from single quotes to double quotes. That part I can easily fix myself in my values, but the rest of the diff should not be shown.

Can you please tell me what I'm doing wrong or how to fix this? :) (I guess it's a bug in the provider.)
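
A possible mitigation for the quoting and key-ordering noise (untested, just a sketch) would be to round-trip the values through yamldecode/yamlencode, so Terraform always submits a normalized document; this would not help with the extra keys the provider injects, though:

resource "rancher2_app_v2" "rancher-monitoring" {
  cluster_id    = var.rancher_cluster_id
  name          = "rancher-monitoring"
  namespace     = "cattle-monitoring-system"
  repo_name     = "rancher-charts"
  chart_name    = "rancher-monitoring"
  chart_version = "9.4.200"

  # yamldecode/yamlencode canonicalize string quoting and sort map keys, which
  # removes the purely cosmetic differences between the local file and what
  # Rancher stores.
  values = yamlencode(yamldecode(templatefile("${path.module}/templates/rancher-monitoring.values.yml", {})))
}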

Additionally, I don't understand why those three lines are needed. I don't want to override anything here, but without them nothing gets deployed:

              alertmanagerSpec:  ## Why are the next two lines needed? (that should be default)
                useExistingSecret: true
                configSecret: alertmanager-rancher-monitoring-alertmanager

Best regards

Stefan

rawmind0 commented 3 years ago

Hi @oed-mertenss, what tf provider version are you using? I'm not able to reproduce your issue with the same config.

What I can tell so far is that there is a conversion from single quotes to double quotes. That part I can easily fix myself in my values, but the rest of the diff should not be shown.

You can check it, but it is not generating a diff in my tests.

Additionally, I don't understand why those three lines are needed. I don't want to override anything here, but without them nothing gets deployed:

             alertmanagerSpec:  ## Why are the next two lines needed? (that should be default)
               useExistingSecret: true
               configSecret: alertmanager-rancher-monitoring-alertmanager

This is only needed if rancher-monitoring is being reinstalled, as documented at https://registry.terraform.io/providers/rancher/rancher2/latest/docs/guides/apps_marketplace#examples-1

ghost commented 3 years ago

Regarding the provider version (I use the latest):

    rancher2 = {
      source  = "rancher/rancher2"
      version = "~> 1.10.4"
    }
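
For the record, "~> 1.10.4" only allows patch releases within the 1.10.x series, so "latest" here means the latest 1.10 patch; a future 1.11.x release would only be picked up after loosening the constraint, for example:

terraform {
  required_providers {
    rancher2 = {
      source = "rancher/rancher2"
      # "~> 1.10.4" means >= 1.10.4 and < 1.11.0 (patch updates only).
      # "~> 1.10"   means >= 1.10.0 and < 2.0.0  (minor updates as well).
      version = "~> 1.10"
    }
  }
}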

I stripped the example down to the minimum:

rkeControllerManager:
  enabled: true

rkeScheduler:
  enabled: true

rkeProxy:
  enabled: true

rkeEtcd:
  enabled: true

alertmanager:
  alertmanagerSpec:
    enabled: false
    useExistingSecret: true
    configSecret: alertmanager-rancher-monitoring-alertmanager

Executing it twice, I get:

      ~ values                      = <<~EOT
          - alertmanager:
          -   alertmanagerSpec:
          -     configSecret: alertmanager-rancher-monitoring-alertmanager
          -     enabled: false
          -     useExistingSecret: true
          - global:
          -   cattle:
          -     clusterId: c-4zpxt
          -     clusterName: tfm-rancher-monitoring
            rkeControllerManager:
              enabled: true
          - rkeEtcd:
          + 
          + rkeScheduler:
              enabled: true
          + 
            rkeProxy:
              enabled: true
          - rkeScheduler:
          + 
          + rkeEtcd:
              enabled: true
          + 
          + alertmanager:
          +   alertmanagerSpec:
          +     enabled: false
          +     useExistingSecret: true
          +     configSecret: alertmanager-rancher-monitoring-alertmanager
        EOT
        wait                        = false
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Regarding alertmanagerSpec, I will analyse it later; we should focus on the ticket's topic. I guess it will be fixed along with the idempotence issue anyway.

I tried your example. First run = no problem. Second run:

      ~ values                      = <<~EOT
            alertmanager:
              alertmanagerSpec:
          -     configSecret: alertmanager-rancher-monitoring-alertmanager
                enabled: false
                useExistingSecret: true
          - global:
          -   cattle:
          -     clusterId: c-4zpxt
          -     clusterName: tfm-rancher-monitoring
          +     configSecret: alertmanager-rancher-monitoring-alertmanager
        EOT
        wait                        = false
    }

Where is the global cattle section coming from?

rawmind0 commented 3 years ago

Where is the global cattle section coming from?

The provider is adding global.cattle.cluster*, but that is taken into account to suppress the resource diff: https://github.com/rancher/terraform-provider-rancher2/blob/master/rancher2/schema_app_v2.go#L94

With the same config, I'm not able to reproduce your issue:

$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # rancher2_app_v2.rancher-monitoring will be created
  + resource "rancher2_app_v2" "rancher-monitoring" {
      + annotations                 = (known after apply)
      + chart_name                  = "rancher-monitoring"
      + chart_version               = "9.4.200"
      + cleanup_on_fail             = false
      + cluster_id                  = "local"
      + cluster_name                = (known after apply)
      + disable_hooks               = false
      + disable_open_api_validation = false
      + force_upgrade               = false
      + id                          = (known after apply)
      + labels                      = (known after apply)
      + name                        = "rancher-monitoring"
      + namespace                   = "cattle-monitoring-system"
      + repo_name                   = "rancher-charts"
      + values                      = <<~EOT
            rkeScheduler:
              enabled: true

            rkeProxy:
              enabled: true

            rkeEtcd:
              enabled: true

            alertmanager:
              alertmanagerSpec:
                useExistingSecret: true
                configSecret: alertmanager-rancher-monitoring-alertmanager
              config:
                global:
                  resolve_timeout: 5m
                receivers:
                - name: "null"
                route:
                  group_by:
                  - job
                  group_interval: 5m
                  group_wait: 30s
                  receiver: 'null'
                  repeat_interval: 12h
                  routes:
                  - match:
                      alertname: Watchdog
                    receiver: 'null'
                templates:
                  - /etc/alertmanager/config/*.tmpl

            prometheus:
              prometheusSpec:
                retention: 30d
                retentionSize: 50Gi
                requests:
                  cpu: "250m"
                  memory: "250Mi"
                storageSpec:
                  volumeClaimTemplate:
                    spec:
                      accessModes:
                      - ReadWriteOnce
                      resources:
                        requests:
                          storage: 50Gi
        EOT
      + wait                        = false
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

rancher2_app_v2.rancher-monitoring: Creating...
rancher2_app_v2.rancher-monitoring: Still creating... [10s elapsed]
rancher2_app_v2.rancher-monitoring: Still creating... [20s elapsed]
rancher2_app_v2.rancher-monitoring: Still creating... [30s elapsed]
rancher2_app_v2.rancher-monitoring: Creation complete after 31s [id=cattle-monitoring-system/rancher-monitoring]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
$ terraform apply
rancher2_app_v2.rancher-monitoring: Refreshing state... [id=cattle-monitoring-system/rancher-monitoring]

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
ghost commented 3 years ago

Weird, what am I doing differently? Can you please show your terraform providers?

rawmind0 commented 3 years ago

Weird, what am I doing differently? Can you please show your terraform providers?

terraform {
  required_providers {
    rancher2 = {
      source = "terraform-providers/rancher2"
      version = "1.10.4"
    }
  }
  required_version = ">= 0.13"
}
ghost commented 3 years ago

I have the same. I started from scratch with a completely new cluster, but I still get the same problems.

        }
        chart_name                  = "rancher-monitoring"
        chart_version               = "9.4.200"
        cleanup_on_fail             = false
        cluster_id                  = "c-8kxzb"
        cluster_name                = "tfm-rancher-monitoring"
        disable_hooks               = false
        disable_open_api_validation = false
        force_upgrade               = false
        id                          = "cattle-monitoring-system/rancher-monitoring"
        labels                      = {
            "objectset.rio.cattle.io/hash" = "afd0d9d7cfc6e6d7ab5c7044fb2bc771e55109c1"
        }
        name                        = "rancher-monitoring"
        namespace                   = "cattle-monitoring-system"
        repo_name                   = "rancher-charts"
      ~ values                      = <<~EOT
          - global:
          -   cattle:
          -     clusterId: c-8kxzb
          -     clusterName: tfm-rancher-monitoring
            rkeControllerManager:
              enabled: true
          - rkeEtcd:
          + 
          + rkeScheduler:
              enabled: true
          + 
            rkeProxy:
              enabled: true
          - rkeScheduler:
          + 
          + rkeEtcd:
              enabled: true
        EOT
        wait                        = false
    }

Plan: 0 to add, 1 to change, 0 to destroy.

I'm using Ubuntu 18.04 and Terraform 0.13.4. Are you in the Rancher Slack? I would be happy to look at the problem together via screen share (tomorrow).

rawmind0 commented 3 years ago

Using a cluster other than local, I was able to reproduce the issue. I found the cause and added a fix in PR #498.
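
Until a release containing that fix is available, one possible stop-gap (just a sketch, not an official recommendation) is to have Terraform ignore changes to the values attribute so the spurious diff does not trigger updates; the trade-off is that genuine edits to the values file are ignored too until the block is removed again:

resource "rancher2_app_v2" "rancher-monitoring" {
  cluster_id    = var.rancher_cluster_id
  name          = "rancher-monitoring"
  namespace     = "cattle-monitoring-system"
  repo_name     = "rancher-charts"
  chart_name    = "rancher-monitoring"
  chart_version = "9.4.200"
  values        = templatefile("${path.module}/templates/rancher-monitoring.values.yml", {})

  lifecycle {
    # Temporary workaround for the non-idempotent values diff; remove once the
    # provider release containing the fix from PR #498 is installed.
    ignore_changes = [values]
  }
}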

ghost commented 3 years ago

That sounds good :) Thanks! Could you give me an estimate for when the new version will be released?

chihaiaalex commented 3 years ago

Same here. Nice to hear that we already have a fix. 👍

rawmind0 commented 3 years ago

Once we release Rancher v2.5.2, we'll merge PR #498 and cut a new provider version.

keisari-ch commented 3 years ago

Hi @rawmind0

I'm facing a similar, if not the same, issue with this resource.

I'm always getting changes on plan/apply even though the values file is static.

I'm using a custom rancher-monitoring values.yml, and I'm working with a Rancher v2.5.5 HA installation.

$ terraform version
Terraform v0.14.5
+ provider registry.terraform.io/rancher/rancher2 v1.11.0  # was using v1.10.6 before, same behaviour
$ md5sum apps/apps_values/k8s-gke-dev/rancher-monitoring/values.yml
9aa061929b2eeab98d0a907d280103ee  apps/apps_values/k8s-gke-dev/rancher-monitoring/values.yml

$ cat 5_apps.tf
resource "rancher2_app_v2" "dev_monitoring" {
  cluster_id = "c-abcde"
  name = "rancher-monitoring"
  namespace = "cattle-monitoring-system"
  repo_name = "rancher-charts"
  chart_name = "rancher-monitoring"
  chart_version = "9.4.202"
  values = file("apps/apps_values/k8s-gke-dev/rancher-monitoring/values.yml")
}
1st terraform apply

```
$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # rancher2_app_v2.dev_monitoring will be updated in-place
  ~ resource "rancher2_app_v2" "dev_monitoring" {
        id     = "c-abcde.cattle-monitoring-system/rancher-monitoring"
        name   = "rancher-monitoring"
      ~ values = <<-EOT
            [several hundred lines of diff elided: the plan re-adds and reorders the
            chart's default values (prometheus-adapter, rke*/rke2*/kubeAdm* component
            sections, defaultRules, global, alertmanager, grafana, kube-state-metrics,
            kubelet, nodeExporter, prometheus, prometheusOperator, ...), none of which
            are set in the local values.yml]
        EOT
        # (13 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

rancher2_app_v2.dev_monitoring: Modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring]
rancher2_app_v2.dev_monitoring: Still modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring, 10s elapsed]
rancher2_app_v2.dev_monitoring: Still modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring, 20s elapsed]
rancher2_app_v2.dev_monitoring: Still modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring, 30s elapsed]
rancher2_app_v2.dev_monitoring: Still modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring, 40s elapsed]
rancher2_app_v2.dev_monitoring: Still modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring, 50s elapsed]
rancher2_app_v2.dev_monitoring: Modifications complete after 57s [id=c-abcde.cattle-monitoring-system/rancher-monitoring]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
```

$ md5sum apps/apps_values/k8s-gke-dev/rancher-monitoring/values.yml
9aa061929b2eeab98d0a907d280103ee  apps/apps_values/k8s-gke-dev/rancher-monitoring/values.yml
2nd terraform apply

``` $ terraform apply An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols: ~ update in-place Terraform will perform the following actions: # rancher2_app_v2.dev_monitoring will be updated in-place ~ resource "rancher2_app_v2" "dev_monitoring" { id = "c-abcde.cattle-monitoring-system/rancher-monitoring" name = "rancher-monitoring" ~ values = <<-EOT - additionalPrometheusRules: null + prometheus-adapter: + enabled: true + prometheus: + url: http://rancher-monitoring-prometheus.cattle-monitoring-system.svc + port: 9090 + image: + repository: rancher/directxman12-k8s-prometheus-adapter-amd64 + tag: v0.7.0 + pullPolicy: IfNotPresent + pullSecrets: {} + psp: + create: true + rkeControllerManager: + enabled: false + metricsPort: 10252 + component: kube-controller-manager + clients: + port: 10011 + useLocalhost: true + nodeSelector: + node-role.kubernetes.io/controlplane: "true" + tolerations: + - effect: NoExecute + operator: Exists + - effect: NoSchedule + operator: Exists + rkeScheduler: + enabled: false + metricsPort: 10251 + component: kube-scheduler + clients: + port: 10012 + useLocalhost: true + nodeSelector: + node-role.kubernetes.io/controlplane: "true" + tolerations: + - effect: NoExecute + operator: Exists + - effect: NoSchedule + operator: Exists + rkeProxy: + enabled: false + metricsPort: 10249 + component: kube-proxy + clients: + port: 10013 + useLocalhost: true + tolerations: + - effect: NoExecute + operator: Exists + - effect: NoSchedule + operator: Exists + rkeEtcd: + enabled: false + metricsPort: 2379 + component: kube-etcd + clients: + port: 10014 + https: + enabled: true + certDir: /etc/kubernetes/ssl + certFile: kube-etcd-*.pem + keyFile: kube-etcd-*-key.pem + caCertFile: kube-ca.pem + nodeSelector: + node-role.kubernetes.io/etcd: "true" + tolerations: + - effect: NoExecute + operator: Exists + - effect: NoSchedule + operator: Exists + k3sServer: + enabled: false + metricsPort: 10249 + component: k3s-server + clients: + port: 10013 + useLocalhost: true + tolerations: + - effect: NoExecute + operator: Exists + - effect: NoSchedule + operator: Exists + kubeAdmControllerManager: + enabled: false + metricsPort: 10257 + component: kube-controller-manager + clients: + port: 10011 + useLocalhost: true + https: + enabled: true + useServiceAccountCredentials: true + insecureSkipVerify: true + nodeSelector: + node-role.kubernetes.io/master: "" + tolerations: + - effect: NoExecute + operator: Exists + - effect: NoSchedule + operator: Exists + kubeAdmScheduler: + enabled: false + metricsPort: 10259 + component: kube-scheduler + clients: + port: 10012 + useLocalhost: true + https: + enabled: true + useServiceAccountCredentials: true + insecureSkipVerify: true + nodeSelector: + node-role.kubernetes.io/master: "" + tolerations: + - effect: NoExecute + operator: Exists + - effect: NoSchedule + operator: Exists + kubeAdmProxy: + enabled: false + metricsPort: 10249 + component: kube-proxy + clients: + port: 10013 + useLocalhost: true + tolerations: + - effect: NoExecute + operator: Exists + - effect: NoSchedule + operator: Exists + kubeAdmEtcd: + enabled: false + metricsPort: 2381 + component: kube-etcd + clients: + port: 10014 + useLocalhost: true + nodeSelector: + node-role.kubernetes.io/master: "" + tolerations: + - effect: NoExecute + operator: Exists + - effect: NoSchedule + operator: Exists + rke2ControllerManager: + enabled: false + metricsPort: 10252 + component: kube-controller-manager + clients: + port: 10011 + 
useLocalhost: true + nodeSelector: + node-role.kubernetes.io/master: "true" + tolerations: + - effect: NoExecute + operator: Exists + - effect: NoSchedule + operator: Exists + rke2Scheduler: + enabled: false + metricsPort: 10251 + component: kube-scheduler + clients: + port: 10012 + useLocalhost: true + nodeSelector: + node-role.kubernetes.io/master: "true" + tolerations: + - effect: NoExecute + operator: Exists + - effect: NoSchedule + operator: Exists + rke2Proxy: + enabled: false + metricsPort: 10249 + component: kube-proxy + clients: + port: 10013 + useLocalhost: true + tolerations: + - effect: NoExecute + operator: Exists + - effect: NoSchedule + operator: Exists + rke2Etcd: + enabled: false + metricsPort: 2381 + component: kube-etcd + clients: + port: 10014 + useLocalhost: true + nodeSelector: + node-role.kubernetes.io/etcd: "true" + tolerations: + - effect: NoSchedule + key: node-role.kubernetes.io/master + operator: Equal + nameOverride: rancher-monitoring + namespaceOverride: cattle-monitoring-system + kubeTargetVersionOverride: "" + fullnameOverride: "" + commonLabels: {} + defaultRules: + create: true + rules: + alertmanager: true + etcd: true + general: true + k8s: true + kubeApiserver: true + kubeApiserverAvailability: true + kubeApiserverError: true + kubeApiserverSlos: true + kubelet: true + kubePrometheusGeneral: true + kubePrometheusNodeAlerting: true + kubePrometheusNodeRecording: true + kubernetesAbsent: true + kubernetesApps: true + kubernetesResources: true + kubernetesStorage: true + kubernetesSystem: true + kubeScheduler: true + kubeStateMetrics: true + network: true + node: true + prometheus: true + prometheusOperator: true + time: true + runbookUrl: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md# + appNamespacesTarget: .* + labels: {} + annotations: {} + additionalPrometheusRules: [] + global: + cattle: + systemDefaultRegistry: "" + kubectl: + repository: rancher/kubectl + tag: v1.18.6 + pullPolicy: IfNotPresent + rbac: + create: true + userRoles: + create: true + aggregateToDefaultRoles: true + pspEnabled: true + pspAnnotations: {} + imagePullSecrets: [] alertmanager: - alertmanagerSpec: - additionalPeers: null - affinity: {} - configMaps: null - containers: null - externalUrl: null - image: - repository: rancher/prom-alertmanager - sha: "" - tag: v0.21.0 - listenLocal: false - logFormat: logfmt - logLevel: info - nodeSelector: {} - paused: false - podAntiAffinity: "" - podAntiAffinityTopologyKey: kubernetes.io/hostname - podMetadata: {} - portName: web - priorityClassName: "" - replicas: 1 - resources: - limits: - cpu: 1000m - memory: 500Mi - requests: - cpu: 100m - memory: 100Mi - retention: 120h - routePrefix: / - secrets: null - securityContext: - fsGroup: 2000 - runAsGroup: 2000 - runAsNonRoot: true - runAsUser: 1000 - storage: {} - tolerations: null - useExistingSecret: false + enabled: true apiVersion: v2 + serviceAccount: + create: true + name: "" + annotations: {} + podDisruptionBudget: + enabled: false + minAvailable: 1 + maxUnavailable: "" config: global: resolve_timeout: 5m - receivers: - - name: "null" route: group_by: - job - group_interval: 5m group_wait: 30s - receiver: "null" + group_interval: 5m repeat_interval: 12h + receiver: "null" routes: - match: alertname: Watchdog receiver: "null" + receivers: + - name: "null" templates: - /etc/alertmanager/config/*.tmpl - enabled: true - ingress: - annotations: {} - enabled: false - hosts: null - labels: {} - paths: null - tls: null - ingressPerReplica: - annotations: {} 
- enabled: false - hostDomain: "" - hostPrefix: "" - labels: {} - paths: null - tlsSecretName: "" - tlsSecretPerReplica: - enabled: false - prefix: alertmanager - podDisruptionBudget: - enabled: false - maxUnavailable: "" - minAvailable: 1 - secret: - annotations: {} - cleanupOnUninstall: false - image: - pullPolicy: IfNotPresent - repository: rancher/rancher-agent - tag: v2.4.8 - securityContext: - runAsNonRoot: true - runAsUser: 1000 - service: - annotations: {} - clusterIP: "" - externalIPs: null - labels: {} - loadBalancerIP: "" - loadBalancerSourceRanges: null - nodePort: 30903 - port: 9093 - targetPort: 9093 - type: ClusterIP - serviceAccount: - annotations: {} - create: true - name: "" - serviceMonitor: - interval: "" - metricRelabelings: null - relabelings: null - selfMonitor: true - servicePerReplica: - annotations: {} - enabled: false - loadBalancerSourceRanges: null - nodePort: 30904 - port: 9093 - targetPort: 9093 - type: ClusterIP + tplConfig: false templateFiles: rancher_defaults.tmpl: |- {{- define "slack.rancher.text" -}} {{ template "rancher.text_multiple" . }} {{- end -}} {{- define "rancher.text_multiple" -}} *[GROUP - Details]* One or more alarms in this group have triggered a notification. {{- if gt (len .GroupLabels.Values) 0 }} *Group Labels:* {{- range .GroupLabels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}` {{- end }} {{- end }} {{- if .ExternalURL }} *Link to AlertManager:* {{ .ExternalURL }} {{- end }} {{- range .Alerts }} {{ template "rancher.text_single" . }} {{- end }} {{- end -}} {{- define "rancher.text_single" -}} {{- if .Labels.alertname }} *[ALERT - {{ .Labels.alertname }}]* {{- else }} *[ALERT]* {{- end }} {{- if .Labels.severity }} *Severity:* `{{ .Labels.severity }}` {{- end }} {{- if .Labels.cluster }} *Cluster:* {{ .Labels.cluster }} {{- end }} {{- if .Annotations.summary }} *Summary:* {{ .Annotations.summary }} {{- end }} {{- if .Annotations.message }} *Message:* {{ .Annotations.message }} {{- end }} {{- if .Annotations.description }} *Description:* {{ .Annotations.description }} {{- end }} {{- if .Annotations.runbook_url }} *Runbook URL:* <{{ .Annotations.runbook_url }}|:spiral_note_pad:> {{- end }} {{- with .Labels }} {{- with .Remove (stringSlice "alertname" "severity" "cluster") }} {{- if gt (len .) 0 }} *Additional Labels:* {{- range .SortedPairs }} • *{{ .Name }}:* `{{ .Value }}` {{- end }} {{- end }} {{- end }} {{- end }} {{- with .Annotations }} {{- with .Remove (stringSlice "summary" "message" "description" "runbook_url") }} {{- if gt (len .) 
0 }} *Additional Annotations:* {{- range .SortedPairs }} • *{{ .Name }}:* `{{ .Value }}` {{- end }} {{- end }} {{- end }} {{- end }} {{- end -}} - tplConfig: false - commonLabels: {} - coreDns: - enabled: true + ingress: + enabled: false + annotations: {} + labels: {} + hosts: [] + paths: [] + tls: [] + secret: + cleanupOnUninstall: false + image: + repository: rancher/rancher-agent + tag: v2.4.8 + pullPolicy: IfNotPresent + securityContext: + runAsNonRoot: true + runAsUser: 1000 + annotations: {} + ingressPerReplica: + enabled: false + annotations: {} + labels: {} + hostPrefix: "" + hostDomain: "" + paths: [] + tlsSecretName: "" + tlsSecretPerReplica: + enabled: false + prefix: alertmanager service: - port: 9153 - targetPort: 9153 + annotations: {} + labels: {} + clusterIP: "" + port: 9093 + targetPort: 9093 + nodePort: 30903 + externalIPs: [] + loadBalancerIP: "" + loadBalancerSourceRanges: [] + type: ClusterIP + servicePerReplica: + enabled: false + annotations: {} + port: 9093 + targetPort: 9093 + nodePort: 30904 + loadBalancerSourceRanges: [] + type: ClusterIP serviceMonitor: interval: "" - metricRelabelings: null - relabelings: null - defaultRules: - annotations: {} - appNamespacesTarget: .* - create: true - labels: {} - rules: - alertmanager: true - etcd: true - general: true - k8s: true - kubeApiserver: true - kubeApiserverAvailability: true - kubeApiserverError: true - kubeApiserverSlos: true - kubePrometheusGeneral: true - kubePrometheusNodeAlerting: true - kubePrometheusNodeRecording: true - kubeScheduler: true - kubeStateMetrics: true - kubelet: true - kubernetesAbsent: true - kubernetesApps: true - kubernetesResources: true - kubernetesStorage: true - kubernetesSystem: true - network: true - node: true - prometheus: true - prometheusOperator: true - time: true - runbookUrl: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md# - fullnameOverride: "" - global: - cattle: - clusterId: c-abcde - clusterName: k8s-gke-dev - systemDefaultRegistry: "" - imagePullSecrets: null - kubectl: - pullPolicy: IfNotPresent - repository: rancher/kubectl - tag: v1.18.6 - rbac: - create: true - pspAnnotations: {} - pspEnabled: true - userRoles: - aggregateToDefaultRoles: true - create: true + selfMonitor: true + metricRelabelings: [] + relabelings: [] + alertmanagerSpec: + podMetadata: {} + image: + repository: rancher/prom-alertmanager + tag: v0.21.0 + sha: "" + useExistingSecret: false + secrets: [] + configMaps: [] + logFormat: logfmt + logLevel: info + replicas: 1 + retention: 120h + storage: {} + externalUrl: null + routePrefix: / + paused: false + nodeSelector: {} + resources: + limits: + memory: 500Mi + cpu: 1000m + requests: + memory: 100Mi + cpu: 100m + podAntiAffinity: "" + podAntiAffinityTopologyKey: kubernetes.io/hostname + affinity: {} + tolerations: [] + securityContext: + runAsGroup: 2000 + runAsNonRoot: true + runAsUser: 1000 + fsGroup: 2000 + listenLocal: false + containers: [] + priorityClassName: "" + additionalPeers: [] + portName: web grafana: - additionalDataSources: null - adminPassword: prom-operator - defaultDashboardsEnabled: true + enabled: true + namespaceOverride: "" + grafana.ini: + users: + auto_assign_org_role: Viewer + auth: + disable_login_form: false + auth.anonymous: + enabled: true + org_role: Viewer + auth.basic: + enabled: false + dashboards: + default_home_dashboard_path: /tmp/dashboards/rancher-default-home.json deploymentStrategy: type: Recreate - enabled: true - extraConfigmapMounts: null - extraContainerVolumes: - - emptyDir: 
{} - name: nginx-home - - configMap: - items: - - key: nginx.conf - mode: 438 - path: nginx.conf - name: grafana-nginx-proxy-config - name: grafana-nginx + defaultDashboardsEnabled: true + adminPassword: prom-operator + ingress: + enabled: false + annotations: {} + labels: {} + hosts: [] + path: / + tls: [] + sidecar: + dashboards: + enabled: true + label: grafana_dashboard + searchNamespace: cattle-dashboards + annotations: {} + datasources: + enabled: true + defaultDatasourceEnabled: true + annotations: {} + createPrometheusReplicasDatasources: false + label: grafana_datasource + extraConfigmapMounts: [] + additionalDataSources: [] + service: + portName: nginx-http + port: 80 + targetPort: 8080 + nodePort: 30950 + type: ClusterIP + proxy: + image: + repository: rancher/library-nginx + tag: 1.19.2-alpine extraContainers: | - name: grafana-proxy args: - nginx - -g - daemon off; - -c - /nginx/nginx.conf image: "{{ template "system_default_registry" . }}{{ .Values.proxy.image.repository }}:{{ .Values.proxy.image.tag }}" ports: - containerPort: 8080 name: nginx-http protocol: TCP volumeMounts: - mountPath: /nginx name: grafana-nginx - mountPath: /var/cache/nginx name: nginx-home securityContext: runAsUser: 101 runAsGroup: 101 - grafana.ini: - auth: - disable_login_form: false - auth.anonymous: - enabled: true - org_role: Viewer - auth.basic: - enabled: false - dashboards: - default_home_dashboard_path: /tmp/dashboards/rancher-default-home.json - users: - auto_assign_org_role: Viewer - ingress: - annotations: {} - enabled: false - hosts: null - labels: {} - path: / - tls: null - namespaceOverride: "" - proxy: - image: - repository: rancher/library-nginx - tag: 1.19.2-alpine - resources: - limits: - cpu: 200m - memory: 200Mi - requests: - cpu: 100m - memory: 100Mi - service: - nodePort: 30950 - port: 80 - portName: nginx-http - targetPort: 8080 - type: ClusterIP + extraContainerVolumes: + - name: nginx-home + emptyDir: {} + - name: grafana-nginx + configMap: + name: grafana-nginx-proxy-config + items: + - key: nginx.conf + mode: 438 + path: nginx.conf serviceMonitor: interval: "" - metricRelabelings: null - relabelings: null selfMonitor: true - sidecar: - dashboards: - annotations: {} - enabled: true - label: grafana_dashboard - searchNamespace: cattle-dashboards - datasources: - annotations: {} - createPrometheusReplicasDatasources: false - defaultDatasourceEnabled: true - enabled: true - label: grafana_datasource - k3sServer: - clients: - port: 10013 - tolerations: - - effect: NoExecute - operator: Exists - - effect: NoSchedule - operator: Exists - useLocalhost: true - component: k3s-server - enabled: false - metricsPort: 10249 - kube-state-metrics: - namespaceOverride: "" - podSecurityPolicy: - enabled: true - rbac: - create: true + metricRelabelings: [] + relabelings: [] resources: limits: - cpu: 100m memory: 200Mi + cpu: 200m requests: + memory: 100Mi cpu: 100m - memory: 130Mi - kubeAdmControllerManager: - clients: - https: - enabled: true - insecureSkipVerify: true - useServiceAccountCredentials: true - nodeSelector: - node-role.kubernetes.io/master: "" - port: 10011 - tolerations: - - effect: NoExecute - operator: Exists - - effect: NoSchedule - operator: Exists - useLocalhost: true - component: kube-controller-manager - enabled: false - metricsPort: 10257 - kubeAdmEtcd: - clients: - nodeSelector: - node-role.kubernetes.io/master: "" - port: 10014 - tolerations: - - effect: NoExecute - operator: Exists - - effect: NoSchedule - operator: Exists - useLocalhost: true - component: kube-etcd 
- enabled: false - metricsPort: 2381 - kubeAdmProxy: - clients: - port: 10013 - tolerations: - - effect: NoExecute - operator: Exists - - effect: NoSchedule - operator: Exists - useLocalhost: true - component: kube-proxy - enabled: false - metricsPort: 10249 - kubeAdmScheduler: - clients: - https: - enabled: true - insecureSkipVerify: true - useServiceAccountCredentials: true - nodeSelector: - node-role.kubernetes.io/master: "" - port: 10012 - tolerations: - - effect: NoExecute - operator: Exists - - effect: NoSchedule - operator: Exists - useLocalhost: true - component: kube-scheduler - enabled: false - metricsPort: 10259 kubeApiServer: enabled: true - relabelings: null + tlsConfig: + serverName: kubernetes + insecureSkipVerify: false + relabelings: [] serviceMonitor: interval: "" jobLabel: component - metricRelabelings: null selector: matchLabels: component: apiserver provider: kubernetes - tlsConfig: - insecureSkipVerify: false - serverName: kubernetes + metricRelabelings: [] + kubelet: + enabled: true + namespace: kube-system + serviceMonitor: + interval: "" + https: true + cAdvisor: true + probes: true + resource: true + resourcePath: /metrics/resource/v1alpha1 + cAdvisorMetricRelabelings: [] + probesMetricRelabelings: [] + cAdvisorRelabelings: + - sourceLabels: + - __metrics_path__ + targetLabel: metrics_path + probesRelabelings: + - sourceLabels: + - __metrics_path__ + targetLabel: metrics_path + resourceRelabelings: + - sourceLabels: + - __metrics_path__ + targetLabel: metrics_path + metricRelabelings: [] + relabelings: + - sourceLabels: + - __metrics_path__ + targetLabel: metrics_path kubeControllerManager: enabled: false - endpoints: null + endpoints: [] service: port: 10252 targetPort: 10252 serviceMonitor: + interval: "" https: false insecureSkipVerify: null - interval: "" - metricRelabelings: null - relabelings: null serverName: null + metricRelabelings: [] + relabelings: [] + coreDns: + enabled: true + service: + port: 9153 + targetPort: 9153 + serviceMonitor: + interval: "" + metricRelabelings: [] + relabelings: [] kubeDns: enabled: false service: dnsmasq: port: 10054 targetPort: 10054 skydns: port: 10055 targetPort: 10055 serviceMonitor: - dnsmasqMetricRelabelings: null - dnsmasqRelabelings: null interval: "" - metricRelabelings: null - relabelings: null + metricRelabelings: [] + relabelings: [] + dnsmasqMetricRelabelings: [] + dnsmasqRelabelings: [] kubeEtcd: enabled: false - endpoints: null + endpoints: [] service: port: 2379 targetPort: 2379 serviceMonitor: - caFile: "" - certFile: "" - insecureSkipVerify: false interval: "" - keyFile: "" - metricRelabelings: null - relabelings: null scheme: http + insecureSkipVerify: false serverName: "" - kubeProxy: - enabled: false - endpoints: null - service: - port: 10249 - targetPort: 10249 - serviceMonitor: - https: false - interval: "" - metricRelabelings: null - relabelings: null + caFile: "" + certFile: "" + keyFile: "" + metricRelabelings: [] + relabelings: [] kubeScheduler: enabled: false - endpoints: null + endpoints: [] service: port: 10251 targetPort: 10251 serviceMonitor: + interval: "" https: false insecureSkipVerify: null - interval: "" - metricRelabelings: null - relabelings: null serverName: null - kubeStateMetrics: - enabled: true + metricRelabelings: [] + relabelings: [] + kubeProxy: + enabled: false + endpoints: [] + service: + port: 10249 + targetPort: 10249 serviceMonitor: interval: "" - metricRelabelings: null - relabelings: null - kubeTargetVersionOverride: "" - kubelet: + https: false + metricRelabelings: [] + 
relabelings: [] + kubeStateMetrics: enabled: true - namespace: kube-system serviceMonitor: - cAdvisor: true - cAdvisorMetricRelabelings: null - cAdvisorRelabelings: - - sourceLabels: - - __metrics_path__ - targetLabel: metrics_path - https: true interval: "" - metricRelabelings: null - probes: true - probesMetricRelabelings: null - probesRelabelings: - - sourceLabels: - - __metrics_path__ - targetLabel: metrics_path - relabelings: - - sourceLabels: - - __metrics_path__ - targetLabel: metrics_path - resource: true - resourcePath: /metrics/resource/v1alpha1 - resourceRelabelings: - - sourceLabels: - - __metrics_path__ - targetLabel: metrics_path - nameOverride: rancher-monitoring - namespaceOverride: cattle-monitoring-system + metricRelabelings: [] + relabelings: [] + kube-state-metrics: + namespaceOverride: "" + rbac: + create: true + podSecurityPolicy: + enabled: true + resources: + limits: + cpu: 100m + memory: 200Mi + requests: + cpu: 100m + memory: 130Mi nodeExporter: enabled: true jobLabel: jobLabel serviceMonitor: interval: "" - metricRelabelings: null - relabelings: null scrapeTimeout: "" - prometheus: - additionalPodMonitors: null - additionalServiceMonitors: null - annotations: {} - enabled: true - ingress: - annotations: {} - enabled: false - hosts: null - labels: {} - paths: null - tls: null - ingressPerReplica: - annotations: {} - enabled: false - hostDomain: "" - hostPrefix: "" - labels: {} - paths: null - tlsSecretName: "" - tlsSecretPerReplica: - enabled: false - prefix: prometheus - podDisruptionBudget: - enabled: false - maxUnavailable: "" - minAvailable: 1 - podSecurityPolicy: - allowedCapabilities: null - prometheusSpec: - additionalAlertManagerConfigs: null - additionalAlertRelabelConfigs: null - additionalPrometheusSecretsAnnotations: {} - additionalScrapeConfigs: null - additionalScrapeConfigsSecret: {} - affinity: {} - alertingEndpoints: null - apiserverConfig: {} - configMaps: null - containers: | - - name: prometheus-proxy - args: - - nginx - - -g - - daemon off; - - -c - - /nginx/nginx.conf - image: "{{ template "system_default_registry" . 
}}{{ .Values.prometheus.prometheusSpec.proxy.image.repository }}:{{ .Values.prometheus.prometheusSpec.proxy.image.tag }}" - ports: - - containerPort: 8080 - name: nginx-http - protocol: TCP - volumeMounts: - - mountPath: /nginx - name: prometheus-nginx - - mountPath: /var/cache/nginx - name: nginx-home - securityContext: - runAsUser: 101 - runAsGroup: 101 - disableCompaction: false - enableAdminAPI: false - evaluationInterval: "" - externalLabels: {} - externalUrl: "" - ignoreNamespaceSelectors: false - image: - repository: rancher/prom-prometheus - sha: "" - tag: v2.18.2 - initContainers: null - listenLocal: false - logFormat: logfmt - logLevel: info - nodeSelector: {} - paused: false - podAntiAffinity: "" - podAntiAffinityTopologyKey: kubernetes.io/hostname - podMetadata: {} - podMonitorNamespaceSelector: {} - podMonitorSelector: {} - podMonitorSelectorNilUsesHelmValues: false - portName: nginx-http - priorityClassName: "" - prometheusExternalLabelName: "" - prometheusExternalLabelNameClear: false - proxy: - image: - repository: rancher/library-nginx - tag: 1.19.2-alpine - query: {} - remoteRead: null - remoteWrite: null - remoteWriteDashboards: false - replicaExternalLabelName: "" - replicaExternalLabelNameClear: false - replicas: 1 - resources: - limits: - cpu: 1000m - memory: 1500Mi - requests: - cpu: 750m - memory: 750Mi - retention: 10d - retentionSize: "" - routePrefix: / - ruleNamespaceSelector: {} - ruleSelector: {} - ruleSelectorNilUsesHelmValues: false - scrapeInterval: "" - secrets: null - securityContext: - fsGroup: 2000 - runAsGroup: 2000 - runAsNonRoot: true - runAsUser: 1000 - serviceMonitorNamespaceSelector: {} - serviceMonitorSelector: {} - serviceMonitorSelectorNilUsesHelmValues: false - storageSpec: {} - thanos: {} - tolerations: null - volumeMounts: null - volumes: - - emptyDir: {} - name: nginx-home - - configMap: - defaultMode: 438 - name: prometheus-nginx-proxy-config - name: prometheus-nginx - walCompression: false - service: - annotations: {} - clusterIP: "" - externalIPs: null - labels: {} - loadBalancerIP: "" - loadBalancerSourceRanges: null - nodePort: 30090 - port: 9090 - sessionAffinity: "" - targetPort: 8080 - type: ClusterIP - serviceAccount: - create: true - name: "" - serviceMonitor: - bearerTokenFile: null - interval: "" - metricRelabelings: null - relabelings: null - scheme: "" - selfMonitor: true - tlsConfig: {} - servicePerReplica: - annotations: {} - enabled: false - loadBalancerSourceRanges: null - nodePort: 30091 - port: 9090 - targetPort: 9090 - type: ClusterIP - thanosIngress: - annotations: {} - enabled: false - hosts: null - labels: {} - paths: null - servicePort: 10901 - tls: null - prometheus-adapter: - enabled: true - image: - pullPolicy: IfNotPresent - pullSecrets: {} - repository: rancher/directxman12-k8s-prometheus-adapter-amd64 - tag: v0.7.0 - prometheus: - port: 9090 - url: http://rancher-monitoring-prometheus.cattle-monitoring-system.svc - psp: - create: true + metricRelabelings: [] + relabelings: [] prometheus-node-exporter: - extraArgs: - - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/) - - --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$ namespaceOverride: "" podLabels: jobLabel: node-exporter + extraArgs: + - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/) + - 
--collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$ + service: + port: 9796 + targetPort: 9796 resources: limits: cpu: 200m memory: 50Mi requests: cpu: 100m memory: 30Mi - service: - port: 9796 - targetPort: 9796 prometheusOperator: - admissionWebhooks: + enabled: true + manageCrds: true + tlsProxy: enabled: true + image: + repository: rancher/squareup-ghostunnel + tag: v1.5.2 + sha: "" + pullPolicy: IfNotPresent + resources: {} + admissionWebhooks: failurePolicy: Fail + enabled: true patch: - affinity: {} enabled: true image: - pullPolicy: IfNotPresent repository: rancher/jettech-kube-webhook-certgen - sha: "" tag: v1.2.1 - nodeSelector: {} - podAnnotations: {} - priorityClassName: "" + sha: "" + pullPolicy: IfNotPresent resources: {} - tolerations: null - affinity: {} - cleanupCustomResource: false - configReloaderCpu: 100m - configReloaderMemory: 25Mi - configmapReloadImage: - repository: rancher/jimmidyson-configmap-reload - sha: "" - tag: v0.3.0 + priorityClassName: "" + podAnnotations: {} + nodeSelector: {} + affinity: {} + tolerations: [] + namespaces: {} + denyNamespaces: [] + serviceAccount: + create: true + name: "" + service: + annotations: {} + labels: {} + clusterIP: "" + nodePort: 30080 + nodePortTls: 30443 + additionalPorts: [] + loadBalancerIP: "" + loadBalancerSourceRanges: [] + type: ClusterIP + externalIPs: [] createCustomResource: true - denyNamespaces: null - enabled: true - hostNetwork: false - image: - pullPolicy: IfNotPresent - repository: rancher/coreos-prometheus-operator - sha: "" - tag: v0.38.1 + cleanupCustomResource: false + podLabels: {} + podAnnotations: {} kubeletService: enabled: true namespace: kube-system - manageCrds: true - namespaces: {} - nodeSelector: {} - podAnnotations: {} - podLabels: {} - prometheusConfigReloaderImage: - repository: rancher/coreos-prometheus-config-reloader - sha: "" - tag: v0.38.1 + serviceMonitor: + interval: "" + scrapeTimeout: "" + selfMonitor: true + metricRelabelings: [] + relabelings: [] resources: limits: cpu: 200m memory: 500Mi requests: cpu: 100m memory: 100Mi - secretFieldSelector: "" + hostNetwork: false + nodeSelector: {} + tolerations: [] + affinity: {} securityContext: fsGroup: 65534 runAsGroup: 65534 runAsNonRoot: true runAsUser: 65534 + image: + repository: rancher/coreos-prometheus-operator + tag: v0.38.1 + sha: "" + pullPolicy: IfNotPresent + configmapReloadImage: + repository: rancher/jimmidyson-configmap-reload + tag: v0.3.0 + sha: "" + prometheusConfigReloaderImage: + repository: rancher/coreos-prometheus-config-reloader + tag: v0.38.1 + sha: "" + configReloaderCpu: 100m + configReloaderMemory: 25Mi + secretFieldSelector: "" + prometheus: + enabled: true + annotations: {} + serviceAccount: + create: true + name: "" service: - additionalPorts: null annotations: {} - clusterIP: "" - externalIPs: null labels: {} + clusterIP: "" + port: 9090 + targetPort: 8080 + externalIPs: [] + nodePort: 30090 loadBalancerIP: "" - loadBalancerSourceRanges: null - nodePort: 30080 - nodePortTls: 30443 + loadBalancerSourceRanges: [] type: ClusterIP - serviceAccount: - create: true - name: "" + sessionAffinity: "" + servicePerReplica: + enabled: false + annotations: {} + port: 9090 + targetPort: 9090 + nodePort: 30091 + loadBalancerSourceRanges: [] + type: ClusterIP + podDisruptionBudget: + enabled: false + minAvailable: 1 + maxUnavailable: "" + thanosIngress: + enabled: false + annotations: 
{} + labels: {} + servicePort: 10901 + hosts: [] + paths: [] + tls: [] + ingress: + enabled: false + annotations: {} + labels: {} + hosts: [] + paths: [] + tls: [] + ingressPerReplica: + enabled: false + annotations: {} + labels: {} + hostPrefix: "" + hostDomain: "" + paths: [] + tlsSecretName: "" + tlsSecretPerReplica: + enabled: false + prefix: prometheus + podSecurityPolicy: + allowedCapabilities: [] serviceMonitor: interval: "" - metricRelabelings: null - relabelings: null - scrapeTimeout: "" selfMonitor: true - tlsProxy: - enabled: true + scheme: "" + tlsConfig: {} + bearerTokenFile: null + metricRelabelings: [] + relabelings: [] + prometheusSpec: + disableCompaction: false + apiserverConfig: {} + scrapeInterval: "" + evaluationInterval: "" + listenLocal: false + enableAdminAPI: false image: - pullPolicy: IfNotPresent - repository: rancher/squareup-ghostunnel + repository: rancher/prom-prometheus + tag: v2.18.2 sha: "" - tag: v1.5.2 - resources: {} - tolerations: null - rke2ControllerManager: - clients: - nodeSelector: - node-role.kubernetes.io/master: "true" - port: 10011 - tolerations: - - effect: NoExecute - operator: Exists - - effect: NoSchedule - operator: Exists - useLocalhost: true - component: kube-controller-manager - enabled: false - metricsPort: 10252 - rke2Etcd: - clients: - nodeSelector: - node-role.kubernetes.io/etcd: "true" - port: 10014 - tolerations: - - effect: NoSchedule - key: node-role.kubernetes.io/master - operator: Equal - useLocalhost: true - component: kube-etcd - enabled: false - metricsPort: 2381 - rke2Proxy: - clients: - port: 10013 - useLocalhost: true - component: kube-proxy - enabled: false - metricsPort: 10249 - tolerations: - - effect: NoExecute - operator: Exists - - effect: NoSchedule - operator: Exists - rke2Scheduler: - clients: - nodeSelector: - node-role.kubernetes.io/master: "true" - port: 10012 - tolerations: - - effect: NoExecute - operator: Exists - - effect: NoSchedule - operator: Exists - useLocalhost: true - component: kube-scheduler - enabled: false - metricsPort: 10251 - rkeControllerManager: - clients: - nodeSelector: - node-role.kubernetes.io/controlplane: "true" - port: 10011 - tolerations: - - effect: NoExecute - operator: Exists - - effect: NoSchedule - operator: Exists - useLocalhost: true - component: kube-controller-manager - enabled: false - metricsPort: 10252 - rkeEtcd: - clients: - https: - caCertFile: kube-ca.pem - certDir: /etc/kubernetes/ssl - certFile: kube-etcd-*.pem - enabled: true - keyFile: kube-etcd-*-key.pem - nodeSelector: - node-role.kubernetes.io/etcd: "true" - port: 10014 - tolerations: - - effect: NoExecute - operator: Exists - - effect: NoSchedule - operator: Exists - component: kube-etcd - enabled: false - metricsPort: 2379 - rkeProxy: - clients: - port: 10013 - tolerations: - - effect: NoExecute - operator: Exists - - effect: NoSchedule - operator: Exists - useLocalhost: true - component: kube-proxy - enabled: false - metricsPort: 10249 - rkeScheduler: - clients: - nodeSelector: - node-role.kubernetes.io/controlplane: "true" - port: 10012 - tolerations: - - effect: NoExecute - operator: Exists - - effect: NoSchedule - operator: Exists - useLocalhost: true - component: kube-scheduler - enabled: false - metricsPort: 10251 + tolerations: [] + alertingEndpoints: [] + externalLabels: {} + replicaExternalLabelName: "" + replicaExternalLabelNameClear: false + prometheusExternalLabelName: "" + prometheusExternalLabelNameClear: false + externalUrl: "" + ignoreNamespaceSelectors: false + nodeSelector: {} + secrets: 
[] + configMaps: [] + query: {} + ruleNamespaceSelector: {} + ruleSelectorNilUsesHelmValues: false + ruleSelector: {} + serviceMonitorSelectorNilUsesHelmValues: false + serviceMonitorSelector: {} + serviceMonitorNamespaceSelector: {} + podMonitorSelectorNilUsesHelmValues: false + podMonitorSelector: {} + podMonitorNamespaceSelector: {} + retention: 10d + retentionSize: "" + walCompression: false + paused: false + replicas: 1 + logLevel: info + logFormat: logfmt + routePrefix: / + podMetadata: {} + podAntiAffinity: "" + podAntiAffinityTopologyKey: kubernetes.io/hostname + affinity: {} + remoteRead: [] + remoteWrite: [] + remoteWriteDashboards: false + resources: + limits: + memory: 1500Mi + cpu: 1000m + requests: + memory: 750Mi + cpu: 750m + storageSpec: {} + additionalScrapeConfigs: [] + additionalScrapeConfigsSecret: {} + additionalPrometheusSecretsAnnotations: {} + additionalAlertManagerConfigs: [] + additionalAlertRelabelConfigs: [] + securityContext: + runAsGroup: 2000 + runAsNonRoot: true + runAsUser: 1000 + fsGroup: 2000 + priorityClassName: "" + thanos: {} + proxy: + image: + repository: rancher/library-nginx + tag: 1.19.2-alpine + containers: | + - name: prometheus-proxy + args: + - nginx + - -g + - daemon off; + - -c + - /nginx/nginx.conf + image: "{{ template "system_default_registry" . }}{{ .Values.prometheus.prometheusSpec.proxy.image.repository }}:{{ .Values.prometheus.prometheusSpec.proxy.image.tag }}" + ports: + - containerPort: 8080 + name: nginx-http + protocol: TCP + volumeMounts: + - mountPath: /nginx + name: prometheus-nginx + - mountPath: /var/cache/nginx + name: nginx-home + securityContext: + runAsUser: 101 + runAsGroup: 101 + volumes: + - name: nginx-home + emptyDir: {} + - name: prometheus-nginx + configMap: + name: prometheus-nginx-proxy-config + defaultMode: 438 + volumeMounts: [] + initContainers: [] + portName: nginx-http + additionalServiceMonitors: [] + additionalPodMonitors: [] EOT # (13 unchanged attributes hidden) } Plan: 0 to add, 1 to change, 0 to destroy. Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: yes rancher2_app_v2.dev_monitoring: Modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring] rancher2_app_v2.dev_monitoring: Still modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring, 10s elapsed] rancher2_app_v2.dev_monitoring: Still modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring, 20s elapsed] rancher2_app_v2.dev_monitoring: Still modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring, 30s elapsed] rancher2_app_v2.dev_monitoring: Still modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring, 40s elapsed] rancher2_app_v2.dev_monitoring: Still modifying... [id=c-abcde.cattle-monitoring-system/rancher-monitoring, 50s elapsed] rancher2_app_v2.dev_monitoring: Modifications complete after 57s [id=c-abcde.cattle-monitoring-system/rancher-monitoring] Apply complete! Resources: 0 added, 1 changed, 0 destroyed. ```

I can't explain this behaviour on my side. Is the `values` attribute really intended to be a full values.yml replacement, or is it meant as a YAML overlay (only the values we want changed from the chart defaults)?
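
In the meantime, a possible stopgap (untested here, just a sketch) would be Terraform's built-in `lifecycle.ignore_changes` on the resource, so the fully merged values that Rancher stores no longer produce a perpetual diff against the short overlay template. The trade-off is that intentional edits to the values template are also ignored until the block is removed or the resource is tainted:

```hcl
resource "rancher2_app_v2" "rancher-monitoring" {
  # ... existing arguments unchanged ...

  lifecycle {
    # Ignore drift on `values` after Rancher merges in the chart defaults.
    # Note: this also hides intentional changes to the local template, so it
    # is a workaround for the non-idempotent plan, not a fix.
    ignore_changes = [values]
  }
}
```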