operator-framework / operator-sdk

Helm Operator doesn't create resources for Kube Prometheus Stack #4733

Closed · kasia-kujawa closed this issue 3 years ago

kasia-kujawa commented 3 years ago

Bug Report

What did you do?

Generated a Helm operator for kube-prometheus-stack using the Operator SDK built from the master branch (commit 508255228591ac9b2b4f45034fb9329cbfe1ef22) to test the fix for https://github.com/operator-framework/operator-sdk/issues/4636:

$ operator-sdk init --plugins helm --helm-chart prometheus-community/kube-prometheus-stack --domain example.com --group helm-chart --version v1 --kind Promhelm
Writing scaffold for you to edit...
Creating the API:
$ operator-sdk create api --group helm-chart --version v1 --kind Promhelm --helm-chart prometheus-community/kube-prometheus-stack
Created helm-charts/kube-prometheus-stack
Generating RBAC rules
WARN[0006] The RBAC rules generated in config/rbac/role.yaml are based on the chart's default manifest. Some rules may be missing for resources that are only enabled with custom values, and some existing rules may be overly broad. Double check the rules generated in config/rbac/role.yaml to ensure they meet the operator's permission requirements. 
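
For reference, the generated config/rbac/role.yaml grants the manager permissions derived from the chart's default manifest; a hypothetical, abbreviated excerpt (not the exact generated file, which lists many more API groups) might look like:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: manager-role
rules:
  # Illustrative rule only -- verify against the real generated file.
  - apiGroups:
      - monitoring.coreos.com
    resources:
      - prometheuses
      - servicemonitors
      - prometheusrules
    verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch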

Modified the Dockerfile to use the image built from the master branch:

# Build the manager binary
FROM quay.io/operator-framework/helm-operator:master

ENV HOME=/opt/helm
COPY watches.yaml ${HOME}/watches.yaml
COPY helm-charts  ${HOME}/helm-charts
WORKDIR ${HOME}
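
The watches.yaml copied above maps the custom resource to the chart; for this project it should look roughly like the following (a sketch based on the default helm-plugin scaffolding, not the exact generated file):

# watches.yaml (sketch)
- group: helm-chart.example.com
  version: v1
  kind: Promhelm
  chart: helm-charts/kube-prometheus-stack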

Set --zap-log-level=debug in config/default/manager_auth_proxy_patch.yaml:

# This patch injects a sidecar container which is an HTTP proxy for the
# controller manager; it performs RBAC authorization against the Kubernetes API using SubjectAccessReviews.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  template:
    spec:
      containers:
      - name: kube-rbac-proxy
        image: gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0
        args:
        - "--secure-listen-address=0.0.0.0:8443"
        - "--upstream=http://127.0.0.1:8080/"
        - "--logtostderr=true"
        - "--v=10"
        ports:
        - containerPort: 8443
          name: https
      - name: manager
        args:
        - "--health-probe-bind-address=:8081"
        - "--metrics-bind-address=127.0.0.1:8080"
        - "--leader-elect"
        - "--leader-election-id=kube-prometheus-stack"
        - "--zap-log-level=debug"
make docker-build IMG="localhost:32000/prom-operator:master"
make docker-push IMG="localhost:32000/prom-operator:master"
make deploy IMG="localhost:32000/prom-operator:master"
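
For context, make deploy in the scaffolded Makefile is roughly equivalent to the following (assuming the default kustomize-based Makefile layout):

$ (cd config/manager && kustomize edit set image controller="localhost:32000/prom-operator:master")
$ kustomize build config/default | kubectl apply -f -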
$ kubectl get pods -n kube-prometheus-stack-system 
NAME                                                        READY   STATUS    RESTARTS   AGE
kube-prometheus-stack-controller-manager-6789dcd479-nmqjf   2/2     Running   0          3m1s
$ kubectl apply -f config/samples/helm-chart_v1_promhelm.yaml
promhelm.helm-chart.example.com/promhelm-sample created
$ kubectl get pods -n kube-prometheus-stack-system 
NAME                                                        READY   STATUS    RESTARTS   AGE
kube-prometheus-stack-controller-manager-6789dcd479-nmqjf   2/2     Running   1          5m1s
$ kubectl logs -n kube-prometheus-stack-system kube-prometheus-stack-controller-manager-6789dcd479-nmqjf manager -p
{"level":"info","ts":1617873971.0132182,"logger":"cmd","msg":"Version","Go Version":"go1.15.10","GOOS":"linux","GOARCH":"amd64","helm-operator":"scorecard-kuttl/v2.0.0+git","commit":"6b6c021400894281b0d35033e83b5671e4c5f1ac"}
{"level":"info","ts":1617873971.0135608,"logger":"cmd","msg":"WATCH_NAMESPACE environment variable not set. Watching all namespaces.","Namespace":""}
{"level":"info","ts":1617873971.6110563,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"127.0.0.1:8080"}
{"level":"info","ts":1617873971.6118536,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617873971.6119115,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617873971.6119232,"logger":"helm.controller","msg":"Watching resource","apiVersion":"helm-chart.example.com/v1","kind":"Promhelm","namespace":"","reconcilePeriod":"1m0s"}
I0408 09:26:11.612256       1 leaderelection.go:243] attempting to acquire leader lease kube-prometheus-stack-system/kube-prometheus-stack...
{"level":"info","ts":1617873971.6124713,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
I0408 09:26:29.014673       1 leaderelection.go:253] successfully acquired lease kube-prometheus-stack-system/kube-prometheus-stack
{"level":"debug","ts":1617873989.0147357,"logger":"controller-runtime.manager.events","msg":"Normal","object":{"kind":"ConfigMap","namespace":"kube-prometheus-stack-system","name":"kube-prometheus-stack","uid":"017e4b1f-54c4-4467-b03f-bf565d53ee5c","apiVersion":"v1","resourceVersion":"40132"},"reason":"LeaderElection","message":"kube-prometheus-stack-controller-manager-6789dcd479-nmqjf_f3f32247-4aa7-4e88-9dfa-3af70c809469 became leader"}
{"level":"info","ts":1617873989.0148664,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: helm-chart.example.com/v1, Kind=Promhelm"}
{"level":"info","ts":1617873989.115548,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting Controller"}
{"level":"info","ts":1617873989.1155875,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting workers","worker count":4}
{"level":"debug","ts":1617873989.1157281,"logger":"helm.controller","msg":"Reconciling","namespace":"default","name":"promhelm-sample","apiVersion":"helm-chart.example.com/v1","kind":"Promhelm"}
$ kubectl logs -n kube-prometheus-stack-system kube-prometheus-stack-controller-manager-6789dcd479-nmqjf manager 
{"level":"info","ts":1617873971.0132182,"logger":"cmd","msg":"Version","Go Version":"go1.15.10","GOOS":"linux","GOARCH":"amd64","helm-operator":"scorecard-kuttl/v2.0.0+git","commit":"6b6c021400894281b0d35033e83b5671e4c5f1ac"}
{"level":"info","ts":1617873971.0135608,"logger":"cmd","msg":"WATCH_NAMESPACE environment variable not set. Watching all namespaces.","Namespace":""}
{"level":"info","ts":1617873971.6110563,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"127.0.0.1:8080"}
{"level":"info","ts":1617873971.6118536,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617873971.6119115,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617873971.6119232,"logger":"helm.controller","msg":"Watching resource","apiVersion":"helm-chart.example.com/v1","kind":"Promhelm","namespace":"","reconcilePeriod":"1m0s"}
I0408 09:26:11.612256       1 leaderelection.go:243] attempting to acquire leader lease kube-prometheus-stack-system/kube-prometheus-stack...
{"level":"info","ts":1617873971.6124713,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
I0408 09:26:29.014673       1 leaderelection.go:253] successfully acquired lease kube-prometheus-stack-system/kube-prometheus-stack
{"level":"debug","ts":1617873989.0147357,"logger":"controller-runtime.manager.events","msg":"Normal","object":{"kind":"ConfigMap","namespace":"kube-prometheus-stack-system","name":"kube-prometheus-stack","uid":"017e4b1f-54c4-4467-b03f-bf565d53ee5c","apiVersion":"v1","resourceVersion":"40132"},"reason":"LeaderElection","message":"kube-prometheus-stack-controller-manager-6789dcd479-nmqjf_f3f32247-4aa7-4e88-9dfa-3af70c809469 became leader"}
{"level":"info","ts":1617873989.0148664,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: helm-chart.example.com/v1, Kind=Promhelm"}
{"level":"info","ts":1617873989.115548,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting Controller"}
{"level":"info","ts":1617873989.1155875,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting workers","worker count":4}
{"level":"debug","ts":1617873989.1157281,"logger":"helm.controller","msg":"Reconciling","namespace":"default","name":"promhelm-sample","apiVersion":"helm-chart.example.com/v1","kind":"Promhelm"}
$ kubectl get pods -n kube-prometheus-stack-system 
NAME                                                        READY   STATUS    RESTARTS   AGE
kube-prometheus-stack-controller-manager-6789dcd479-nmqjf   2/2     Running   4          8m50s
$ kubectl get Promhelm --all-namespaces
NAMESPACE   NAME              AGE
default     promhelm-sample   5m15s
$ kubectl get pods 
No resources found in default namespace.
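
Since the Helm operator drives ordinary Helm v3 releases, one way to check whether a release was created at all (a debugging sketch, not part of the original steps) is to look for Helm's release Secrets, which carry the owner=helm label:

$ kubectl get secrets -n default -l owner=helm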
$ kubectl get pods -n kube-prometheus-stack-system 
NAME                                                        READY   STATUS             RESTARTS   AGE
kube-prometheus-stack-controller-manager-6789dcd479-nmqjf   1/2     CrashLoopBackOff   12         49m
$ kubectl describe pod kube-prometheus-stack-controller-manager-6789dcd479-nmqjf -n kube-prometheus-stack-system
Name:         kube-prometheus-stack-controller-manager-6789dcd479-nmqjf
Namespace:    kube-prometheus-stack-system
Priority:     0
Node:         sumologic-kubernetes-collection-operator/10.0.2.15
Start Time:   Thu, 08 Apr 2021 09:19:17 +0000
Labels:       control-plane=controller-manager
              pod-template-hash=6789dcd479
Annotations:  cni.projectcalico.org/podIP: 10.1.148.12/32
              cni.projectcalico.org/podIPs: 10.1.148.12/32
Status:       Running
IP:           10.1.148.12
IPs:
  IP:           10.1.148.12
Controlled By:  ReplicaSet/kube-prometheus-stack-controller-manager-6789dcd479
Containers:
  kube-rbac-proxy:
    Container ID:  containerd://6b86eaa14a4dbaf980caf911818951968dce0abf8f84f049c81aca6260cfd75d
    Image:         gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0
    Image ID:      gcr.io/kubebuilder/kube-rbac-proxy@sha256:a06e7b56c5e1e63b87abb417344f59bf4a8e53695b8463121537c3854c5fda82
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --secure-listen-address=0.0.0.0:8443
      --upstream=http://127.0.0.1:8080/
      --logtostderr=true
      --v=10
    State:          Running
      Started:      Thu, 08 Apr 2021 09:19:18 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-prometheus-stack-controller-manager-token-q7cks (ro)
  manager:
    Container ID:  containerd://c052bd69dad7c106a71f5439890e345d8c730e2e9fe8531cf0bd99ee8dc74fcd
    Image:         localhost:32000/prom-operator:master
    Image ID:      localhost:32000/prom-operator@sha256:23f325cc84af8f317a14b6e3b19cbed64668efa0bf10fb93b91e54e4e152e8d4
    Port:          <none>
    Host Port:     <none>
    Args:
      --health-probe-bind-address=:8081
      --metrics-bind-address=127.0.0.1:8080
      --leader-elect
      --leader-election-id=kube-prometheus-stack
      --zap-log-level=debug
    State:          Running
      Started:      Thu, 08 Apr 2021 09:30:02 +0000
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Thu, 08 Apr 2021 09:27:42 +0000
      Finished:     Thu, 08 Apr 2021 09:28:34 +0000
    Ready:          True
    Restart Count:  5
    Limits:
      cpu:     100m
      memory:  90Mi
    Requests:
      cpu:        100m
      memory:     60Mi
    Liveness:     http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:    http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-prometheus-stack-controller-manager-token-q7cks (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-prometheus-stack-controller-manager-token-q7cks:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-prometheus-stack-controller-manager-token-q7cks
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  11m                   default-scheduler  Successfully assigned kube-prometheus-stack-system/kube-prometheus-stack-controller-manager-6789dcd479-nmqjf to sumologic-kubernetes-collection-operator
  Normal   Pulled     11m                   kubelet            Container image "gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0" already present on machine
  Normal   Created    11m                   kubelet            Created container kube-rbac-proxy
  Normal   Started    11m                   kubelet            Started container kube-rbac-proxy
  Warning  Unhealthy  5m54s                 kubelet            Readiness probe failed: Get "http://10.1.148.12:8081/readyz": dial tcp 10.1.148.12:8081: connect: connection refused
  Normal   Started    4m28s (x4 over 11m)   kubelet            Started container manager
  Normal   Pulled     2m56s (x5 over 11m)   kubelet            Container image "localhost:32000/prom-operator:master" already present on machine
  Normal   Created    2m56s (x5 over 11m)   kubelet            Created container manager
  Warning  BackOff    75s (x13 over 5m54s)  kubelet            Back-off restarting failed container
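
Note the Last State above: the manager container is OOMKilled (exit code 137) against its 90Mi memory limit. A quick way to confirm the termination reason (a sketch using kubectl's JSONPath filter support):

$ kubectl get pod -n kube-prometheus-stack-system kube-prometheus-stack-controller-manager-6789dcd479-nmqjf \
    -o jsonpath='{.status.containerStatuses[?(@.name=="manager")].lastState.terminated.reason}'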

What did you expect to see?

Resources for the kube-prometheus-stack chart created in the cluster after applying the Promhelm custom resource.

What did you see instead? Under which circumstances?

No chart resources were created in the default namespace, and the operator's controller-manager Pod was repeatedly OOMKilled and eventually went into CrashLoopBackOff (see the pod description above).

Environment

Operator type: helm

Kubernetes cluster type: microk8s

$ operator-sdk version
operator-sdk version: "scorecard-kuttl/v2.0.0-49-g50825522", commit: "508255228591ac9b2b4f45034fb9329cbfe1ef22", kubernetes version: "v1.19.4", go version: "go1.15.7", GOOS: "linux", GOARCH: "amd64"

$ go version
go version go1.15.7 linux/amd64

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.9-34+69a399560680be", GitCommit:"69a399560680be89d9b60c220c82560cf89adfaf", GitTreeState:"clean", BuildDate:"2021-03-22T17:04:05Z", GoVersion:"go1.15.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.9-34+69a399560680be", GitCommit:"69a399560680be89d9b60c220c82560cf89adfaf", GitTreeState:"clean", BuildDate:"2021-03-22T17:05:12Z", GoVersion:"go1.15.10", Compiler:"gc", Platform:"linux/amd64"}

$ helm version
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /var/snap/microk8s/current/credentials/kubelet.config
version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}
$ helm search repo prometheus-community/kube-prometheus-stack
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /var/snap/microk8s/current/credentials/kubelet.config
NAME                                            CHART VERSION   APP VERSION     DESCRIPTION                                       
prometheus-community/kube-prometheus-stack      14.5.0          0.46.0          kube-prometheus-stack collects Kubernetes manif...

Possible Solution
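
Not verified, but the repeated OOMKilled terminations above suggest the scaffolded manager resources (90Mi memory limit) are too small for a chart as large as kube-prometheus-stack, whose rendered manifests and watched dependent resources the operator keeps in memory. A possible fix would be raising the limits on the manager container in config/manager/manager.yaml, for example (values are illustrative):

# config/manager/manager.yaml, manager container -- illustrative values
resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 100m
    memory: 128Mi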

Additional context

I performed exactly the same steps for the prometheus-community/prometheus-node-exporter Helm chart; the issue did not appear and everything worked as expected.

kasia-kujawa commented 3 years ago

Yesterday I tested the build from https://github.com/operator-framework/operator-sdk/commit/6b6c021400894281b0d35033e83b5671e4c5f1ac and saw that the resources for kube-prometheus-stack are created when I apply the simpler custom resource provided under issue #4636.

I reproduced the issue on that build and saw the same behaviour: with the simpler configuration the resources for kube-prometheus-stack were created, but the Pod for the Helm operator restarts quite frequently without any errors in the logs and after several minutes goes into CrashLoopBackOff.

See my steps:

cd $GOPATH/src/github.com/operator-framework/operator-sdk/
$ git log -n 1
commit 6b6c021400894281b0d35033e83b5671e4c5f1ac (HEAD -> master)
Author: Eric Stroczynski <estroczy@redhat.com>
Date:   Mon Mar 29 14:57:39 2021 -0700

    internal/plugins: follow-up updates from #4581 (#4707)

    Signed-off-by: Eric Stroczynski <ericstroczynski@gmail.com>
docker buildx build --platform linux/amd64 -t localhost:32000/helm-operator:6b6c0214 --push -f images/helm-operator/Dockerfile .
$ docker images
REPOSITORY                      TAG        IMAGE ID       CREATED          SIZE
localhost:32000/helm-operator   6b6c0214   5a4da601230f   12 seconds ago   161MB
make build
make install
$ operator-sdk version
operator-sdk version: "scorecard-kuttl/v2.0.0-43-g6b6c0214", commit: "6b6c021400894281b0d35033e83b5671e4c5f1ac", kubernetes version: "v1.19.4", go version: "go1.15.7", GOOS: "linux", GOARCH: "amd64"
mkdir $HOME/kube-prometheus-stack
cd $HOME/kube-prometheus-stack
$ operator-sdk init --plugins helm --helm-chart prometheus-community/kube-prometheus-stack --domain example.com --group helm-chart --version v1 --kind Promhelm
Creating the API:
$ operator-sdk create api --group helm-chart --version v1 --kind Promhelm --helm-chart prometheus-community/kube-prometheus-stack
Created helm-charts/kube-prometheus-stack
Generating RBAC rules
WARN[0006] The RBAC rules generated in config/rbac/role.yaml are based on the chart's default manifest. Some rules may be missing for resources that are only enabled with custom values, and some existing rules may be overly broad. Double check the rules generated in config/rbac/role.yaml to ensure they meet the operator's permission requirements. 

Modified the Dockerfile to use the image built from the desired commit:

# Build the manager binary
FROM localhost:32000/helm-operator:6b6c0214

ENV HOME=/opt/helm
COPY watches.yaml ${HOME}/watches.yaml
COPY helm-charts  ${HOME}/helm-charts
WORKDIR ${HOME}
make docker-build IMG="localhost:32000/prom-operator:latest"
make docker-push IMG="localhost:32000/prom-operator:latest"
make deploy IMG="localhost:32000/prom-operator:latest"
$ kubectl get pods -n kube-prometheus-stack-system
NAME                                                      READY   STATUS    RESTARTS   AGE
kube-prometheus-stack-controller-manager-9c49595f-2l4vm   2/2     Running   0          14s
$ kubectl describe pod -n kube-prometheus-stack-system 
Name:         kube-prometheus-stack-controller-manager-9c49595f-2l4vm
Namespace:    kube-prometheus-stack-system
Priority:     0
Node:         sumologic-kubernetes-collection-operator/10.0.2.15
Start Time:   Thu, 08 Apr 2021 12:18:18 +0000
Labels:       control-plane=controller-manager
              pod-template-hash=9c49595f
Annotations:  cni.projectcalico.org/podIP: 10.1.148.15/32
              cni.projectcalico.org/podIPs: 10.1.148.15/32
Status:       Running
IP:           10.1.148.15
IPs:
  IP:           10.1.148.15
Controlled By:  ReplicaSet/kube-prometheus-stack-controller-manager-9c49595f
Containers:
  kube-rbac-proxy:
    Container ID:  containerd://923d6be1930fa4ad8dd1d8c3a558ab0ec922233d6251ffecfc3d85007648c012
    Image:         gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0
    Image ID:      gcr.io/kubebuilder/kube-rbac-proxy@sha256:a06e7b56c5e1e63b87abb417344f59bf4a8e53695b8463121537c3854c5fda82
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --secure-listen-address=0.0.0.0:8443
      --upstream=http://127.0.0.1:8080/
      --logtostderr=true
      --v=10
    State:          Running
      Started:      Thu, 08 Apr 2021 12:18:19 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-prometheus-stack-controller-manager-token-trztg (ro)
  manager:
    Container ID:  containerd://7a7777bfff0233f07dcb7b6b5ec364c43b93c3003a02e9a051ded02a67482536
    Image:         localhost:32000/prom-operator:latest
    Image ID:      localhost:32000/prom-operator@sha256:61ce10c111dee0f620e60676225c913bcb0d79e8bab816ecf0100da35c4d2236
    Port:          <none>
    Host Port:     <none>
    Args:
      --health-probe-bind-address=:8081
      --metrics-bind-address=127.0.0.1:8080
      --leader-elect
      --leader-election-id=kube-prometheus-stack
    State:          Running
      Started:      Thu, 08 Apr 2021 12:18:19 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  90Mi
    Requests:
      cpu:        100m
      memory:     60Mi
    Liveness:     http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:    http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-prometheus-stack-controller-manager-token-trztg (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-prometheus-stack-controller-manager-token-trztg:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-prometheus-stack-controller-manager-token-trztg
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  47s   default-scheduler  Successfully assigned kube-prometheus-stack-system/kube-prometheus-stack-controller-manager-9c49595f-2l4vm to sumologic-kubernetes-collection-operator
  Normal  Pulled     46s   kubelet            Container image "gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0" already present on machine
  Normal  Created    46s   kubelet            Created container kube-rbac-proxy
  Normal  Started    46s   kubelet            Started container kube-rbac-proxy
  Normal  Pulling    46s   kubelet            Pulling image "localhost:32000/prom-operator:latest"
  Normal  Pulled     46s   kubelet            Successfully pulled image "localhost:32000/prom-operator:latest" in 175.343247ms
  Normal  Created    46s   kubelet            Created container manager
  Normal  Started    46s   kubelet            Started container manager
$ kubectl apply -f config/samples/helm-chart_v1_promhelm.yaml -n kube-prometheus-stack-system 
promhelm.helm-chart.example.com/promhelm-sample created
$ kubectl get pods -n kube-prometheus-stack-system
NAME                                                      READY   STATUS    RESTARTS   AGE
kube-prometheus-stack-controller-manager-9c49595f-2l4vm   2/2     Running   0          2m4s
$ kubectl get pods -n kube-prometheus-stack-system
NAME                                                      READY   STATUS    RESTARTS   AGE
kube-prometheus-stack-controller-manager-9c49595f-2l4vm   2/2     Running   0          2m4s
$ kubectl logs -n kube-prometheus-stack-system kube-prometheus-stack-controller-manager-9c49595f-2l4vm manager 
{"level":"info","ts":1617884430.7197988,"logger":"cmd","msg":"Version","Go Version":"go1.15.11","GOOS":"linux","GOARCH":"amd64","helm-operator":"scorecard-kuttl/v2.0.0+git","commit":"6b6c021400894281b0d35033e83b5671e4c5f1ac"}
{"level":"info","ts":1617884430.7199905,"logger":"cmd","msg":"WATCH_NAMESPACE environment variable not set. Watching all namespaces.","Namespace":""}
{"level":"info","ts":1617884431.2739377,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"127.0.0.1:8080"}
{"level":"info","ts":1617884431.2748816,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617884431.274924,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617884431.2749338,"logger":"helm.controller","msg":"Watching resource","apiVersion":"helm-chart.example.com/v1","kind":"Promhelm","namespace":"","reconcilePeriod":"1m0s"}
{"level":"info","ts":1617884431.2753308,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
I0408 12:20:31.275330       1 leaderelection.go:243] attempting to acquire leader lease kube-prometheus-stack-system/kube-prometheus-stack...
I0408 12:20:48.679177       1 leaderelection.go:253] successfully acquired lease kube-prometheus-stack-system/kube-prometheus-stack
{"level":"info","ts":1617884448.6794257,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: helm-chart.example.com/v1, Kind=Promhelm"}
{"level":"info","ts":1617884448.7798038,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting Controller"}
{"level":"info","ts":1617884448.779887,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting workers","worker count":4}
$ kubectl logs -n kube-prometheus-stack-system kube-prometheus-stack-controller-manager-9c49595f-2l4vm manager -p
{"level":"info","ts":1617884430.7197988,"logger":"cmd","msg":"Version","Go Version":"go1.15.11","GOOS":"linux","GOARCH":"amd64","helm-operator":"scorecard-kuttl/v2.0.0+git","commit":"6b6c021400894281b0d35033e83b5671e4c5f1ac"}
{"level":"info","ts":1617884430.7199905,"logger":"cmd","msg":"WATCH_NAMESPACE environment variable not set. Watching all namespaces.","Namespace":""}
{"level":"info","ts":1617884431.2739377,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"127.0.0.1:8080"}
{"level":"info","ts":1617884431.2748816,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617884431.274924,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617884431.2749338,"logger":"helm.controller","msg":"Watching resource","apiVersion":"helm-chart.example.com/v1","kind":"Promhelm","namespace":"","reconcilePeriod":"1m0s"}
{"level":"info","ts":1617884431.2753308,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
I0408 12:20:31.275330       1 leaderelection.go:243] attempting to acquire leader lease kube-prometheus-stack-system/kube-prometheus-stack...
I0408 12:20:48.679177       1 leaderelection.go:253] successfully acquired lease kube-prometheus-stack-system/kube-prometheus-stack
{"level":"info","ts":1617884448.6794257,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: helm-chart.example.com/v1, Kind=Promhelm"}
{"level":"info","ts":1617884448.7798038,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting Controller"}
{"level":"info","ts":1617884448.779887,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting workers","worker count":4}
$ kubectl get pods --all-namespaces
NAMESPACE                      NAME                                                      READY   STATUS    RESTARTS   AGE
kube-system                    calico-node-k742j                                         1/1     Running   0          22h
kube-system                    hostpath-provisioner-5c65fbdb4f-pr25r                     1/1     Running   0          22h
kube-system                    calico-kube-controllers-847c8c99d-qwmb8                   1/1     Running   0          22h
container-registry             registry-9b57d9df8-fgr4t                                  1/1     Running   0          22h
kube-system                    coredns-86f78bb79c-ccvx9                                  1/1     Running   0          22h
kube-prometheus-stack-system   kube-prometheus-stack-controller-manager-9c49595f-2l4vm   2/2     Running   2          3m56s
$ kubectl get Promhelm --all-namespaces
NAMESPACE                      NAME              AGE
kube-prometheus-stack-system   promhelm-sample   3m7s
$ kubectl get pods -n kube-prometheus-stack-system
NAME                                                      READY   STATUS             RESTARTS   AGE
kube-prometheus-stack-controller-manager-9c49595f-2l4vm   1/2     CrashLoopBackOff   3          5m51s
$ kubectl describe pod -n kube-prometheus-stack-system kube-prometheus-stack-controller-manager-9c49595f-2l4vm 
Name:         kube-prometheus-stack-controller-manager-9c49595f-2l4vm
Namespace:    kube-prometheus-stack-system
Priority:     0
Node:         sumologic-kubernetes-collection-operator/10.0.2.15
Start Time:   Thu, 08 Apr 2021 12:18:18 +0000
Labels:       control-plane=controller-manager
              pod-template-hash=9c49595f
Annotations:  cni.projectcalico.org/podIP: 10.1.148.15/32
              cni.projectcalico.org/podIPs: 10.1.148.15/32
Status:       Running
IP:           10.1.148.15
IPs:
  IP:           10.1.148.15
Controlled By:  ReplicaSet/kube-prometheus-stack-controller-manager-9c49595f
Containers:
  kube-rbac-proxy:
    Container ID:  containerd://923d6be1930fa4ad8dd1d8c3a558ab0ec922233d6251ffecfc3d85007648c012
    Image:         gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0
    Image ID:      gcr.io/kubebuilder/kube-rbac-proxy@sha256:a06e7b56c5e1e63b87abb417344f59bf4a8e53695b8463121537c3854c5fda82
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --secure-listen-address=0.0.0.0:8443
      --upstream=http://127.0.0.1:8080/
      --logtostderr=true
      --v=10
    State:          Running
      Started:      Thu, 08 Apr 2021 12:18:19 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-prometheus-stack-controller-manager-token-trztg (ro)
  manager:
    Container ID:  containerd://937b266b36cb80432c8e0abc6acb9612fb7f99e2875ed188ddae020b6de69fec
    Image:         localhost:32000/prom-operator:latest
    Image ID:      localhost:32000/prom-operator@sha256:61ce10c111dee0f620e60676225c913bcb0d79e8bab816ecf0100da35c4d2236
    Port:          <none>
    Host Port:     <none>
    Args:
      --health-probe-bind-address=:8081
      --metrics-bind-address=127.0.0.1:8080
      --leader-elect
      --leader-election-id=kube-prometheus-stack
    State:          Running
      Started:      Thu, 08 Apr 2021 12:24:24 +0000
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Thu, 08 Apr 2021 12:22:48 +0000
      Finished:     Thu, 08 Apr 2021 12:23:31 +0000
    Ready:          True
    Restart Count:  4
    Limits:
      cpu:     100m
      memory:  90Mi
    Requests:
      cpu:        100m
      memory:     60Mi
    Liveness:     http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:    http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-prometheus-stack-controller-manager-token-trztg (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-prometheus-stack-controller-manager-token-trztg:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-prometheus-stack-controller-manager-token-trztg
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  6m34s                 default-scheduler  Successfully assigned kube-prometheus-stack-system/kube-prometheus-stack-controller-manager-9c49595f-2l4vm to sumologic-kubernetes-collection-operator
  Normal   Pulled     6m33s                 kubelet            Container image "gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0" already present on machine
  Normal   Created    6m33s                 kubelet            Created container kube-rbac-proxy
  Normal   Started    6m33s                 kubelet            Started container kube-rbac-proxy
  Normal   Pulled     6m33s                 kubelet            Successfully pulled image "localhost:32000/prom-operator:latest" in 175.343247ms
  Warning  Unhealthy  4m23s                 kubelet            Readiness probe failed: Get "http://10.1.148.15:8081/readyz": dial tcp 10.1.148.15:8081: connect: connection refused
  Normal   Pulled     4m22s                 kubelet            Successfully pulled image "localhost:32000/prom-operator:latest" in 44.107264ms
  Normal   Pulled     3m23s                 kubelet            Successfully pulled image "localhost:32000/prom-operator:latest" in 23.664867ms
  Normal   Pulling    2m4s (x4 over 6m33s)  kubelet            Pulling image "localhost:32000/prom-operator:latest"
  Normal   Pulled     2m4s                  kubelet            Successfully pulled image "localhost:32000/prom-operator:latest" in 24.778475ms
  Normal   Created    2m4s (x4 over 6m33s)  kubelet            Created container manager
  Normal   Started    2m4s (x4 over 6m33s)  kubelet            Started container manager
  Warning  BackOff    80s (x6 over 3m40s)   kubelet            Back-off restarting failed container
$ kubectl logs -n kube-prometheus-stack-system kube-prometheus-stack-controller-manager-9c49595f-2l4vm manager -p
{"level":"info","ts":1617884665.0066597,"logger":"cmd","msg":"Version","Go Version":"go1.15.11","GOOS":"linux","GOARCH":"amd64","helm-operator":"scorecard-kuttl/v2.0.0+git","commit":"6b6c021400894281b0d35033e83b5671e4c5f1ac"}
{"level":"info","ts":1617884665.007016,"logger":"cmd","msg":"WATCH_NAMESPACE environment variable not set. Watching all namespaces.","Namespace":""}
{"level":"info","ts":1617884665.562092,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"127.0.0.1:8080"}
{"level":"info","ts":1617884665.5630333,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617884665.5630617,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617884665.563072,"logger":"helm.controller","msg":"Watching resource","apiVersion":"helm-chart.example.com/v1","kind":"Promhelm","namespace":"","reconcilePeriod":"1m0s"}
{"level":"info","ts":1617884665.5634668,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
I0408 12:24:25.563445       1 leaderelection.go:243] attempting to acquire leader lease kube-prometheus-stack-system/kube-prometheus-stack...
I0408 12:24:42.962445       1 leaderelection.go:253] successfully acquired lease kube-prometheus-stack-system/kube-prometheus-stack
{"level":"info","ts":1617884682.9627075,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: helm-chart.example.com/v1, Kind=Promhelm"}
{"level":"info","ts":1617884683.0632606,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting Controller"}
{"level":"info","ts":1617884683.0632865,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting workers","worker count":4}
$ kubectl logs -n kube-prometheus-stack-system kube-prometheus-stack-controller-manager-9c49595f-2l4vm manager 
{"level":"info","ts":1617884665.0066597,"logger":"cmd","msg":"Version","Go Version":"go1.15.11","GOOS":"linux","GOARCH":"amd64","helm-operator":"scorecard-kuttl/v2.0.0+git","commit":"6b6c021400894281b0d35033e83b5671e4c5f1ac"}
{"level":"info","ts":1617884665.007016,"logger":"cmd","msg":"WATCH_NAMESPACE environment variable not set. Watching all namespaces.","Namespace":""}
{"level":"info","ts":1617884665.562092,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"127.0.0.1:8080"}
{"level":"info","ts":1617884665.5630333,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617884665.5630617,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617884665.563072,"logger":"helm.controller","msg":"Watching resource","apiVersion":"helm-chart.example.com/v1","kind":"Promhelm","namespace":"","reconcilePeriod":"1m0s"}
{"level":"info","ts":1617884665.5634668,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
I0408 12:24:25.563445       1 leaderelection.go:243] attempting to acquire leader lease kube-prometheus-stack-system/kube-prometheus-stack...
I0408 12:24:42.962445       1 leaderelection.go:253] successfully acquired lease kube-prometheus-stack-system/kube-prometheus-stack
{"level":"info","ts":1617884682.9627075,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: helm-chart.example.com/v1, Kind=Promhelm"}
{"level":"info","ts":1617884683.0632606,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting Controller"}
{"level":"info","ts":1617884683.0632865,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting workers","worker count":4}
make undeploy
make deploy IMG="localhost:32000/prom-operator:latest"
$ kubectl get pods -n kube-prometheus-stack-system
NAME                                                      READY   STATUS    RESTARTS   AGE
kube-prometheus-stack-controller-manager-9c49595f-m6hlz   2/2     Running   0          24s

sample.yaml:

apiVersion: helm-chart.example.com/v1
kind: Promhelm
metadata:
  name: promhelm-sample
spec:
  additionalPrometheusRulesMap:
    some-rules:
      groups:
        - name: foo
          rules:
            - expr: bar
              record: baz
  alertmanager:
    enabled: false
  grafana:
    enabled: false
    defaultDashboardsEnabled: false
  prometheusOperator:
    admissionWebhooks:
      enabled: false
    tls:
      enabled: false
kubectl apply -f sample.yaml
$ kubectl get pods 
NAME                                                  READY   STATUS    RESTARTS   AGE
promhelm-sample-kube-prome-operator-b5f68465b-r6jcj   1/1     Running   0          42s
promhelm-sample-prometheus-node-exporter-jd7c4        1/1     Running   0          43s
prometheus-promhelm-sample-kube-prome-prometheus-0    2/2     Running   1          40s
promhelm-sample-kube-state-metrics-7f7c9dd764-ws6x8   1/1     Running   0          43s
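
Because the operator stores release state in standard Helm v3 Secrets, the release it created can also be inspected with the helm CLI (a side check, not part of the original steps):

$ helm list -n default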
$ kubectl get pods -n kube-prometheus-stack-system
NAME                                                      READY   STATUS    RESTARTS   AGE
kube-prometheus-stack-controller-manager-9c49595f-m6hlz   2/2     Running   3          11m
$ kubectl describe pod -n kube-prometheus-stack-system kube-prometheus-stack-controller-manager-9c49595f-m6hlz
Name:         kube-prometheus-stack-controller-manager-9c49595f-m6hlz
Namespace:    kube-prometheus-stack-system
Priority:     0
Node:         sumologic-kubernetes-collection-operator/10.0.2.15
Start Time:   Thu, 08 Apr 2021 12:28:51 +0000
Labels:       control-plane=controller-manager
              pod-template-hash=9c49595f
Annotations:  cni.projectcalico.org/podIP: 10.1.148.16/32
              cni.projectcalico.org/podIPs: 10.1.148.16/32
Status:       Running
IP:           10.1.148.16
IPs:
  IP:           10.1.148.16
Controlled By:  ReplicaSet/kube-prometheus-stack-controller-manager-9c49595f
Containers:
  kube-rbac-proxy:
    Container ID:  containerd://c2690044d06512b64e0a2a7fbcdce5355f822bbe0a548e2065b8a06a0689e04d
    Image:         gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0
    Image ID:      gcr.io/kubebuilder/kube-rbac-proxy@sha256:a06e7b56c5e1e63b87abb417344f59bf4a8e53695b8463121537c3854c5fda82
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --secure-listen-address=0.0.0.0:8443
      --upstream=http://127.0.0.1:8080/
      --logtostderr=true
      --v=10
    State:          Running
      Started:      Thu, 08 Apr 2021 12:28:53 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-prometheus-stack-controller-manager-token-gr4qn (ro)
  manager:
    Container ID:  containerd://0bbf3772c4142602b966b374db3bc0ec246820dfec837ea26da6656fc74dcd69
    Image:         localhost:32000/prom-operator:latest
    Image ID:      localhost:32000/prom-operator@sha256:61ce10c111dee0f620e60676225c913bcb0d79e8bab816ecf0100da35c4d2236
    Port:          <none>
    Host Port:     <none>
    Args:
      --health-probe-bind-address=:8081
      --metrics-bind-address=127.0.0.1:8080
      --leader-elect
      --leader-election-id=kube-prometheus-stack
    State:          Running
      Started:      Thu, 08 Apr 2021 12:39:48 +0000
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Thu, 08 Apr 2021 12:36:31 +0000
      Finished:     Thu, 08 Apr 2021 12:39:16 +0000
    Ready:          True
    Restart Count:  3
    Limits:
      cpu:     100m
      memory:  90Mi
    Requests:
      cpu:        100m
      memory:     60Mi
    Liveness:     http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:    http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-prometheus-stack-controller-manager-token-gr4qn (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-prometheus-stack-controller-manager-token-gr4qn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-prometheus-stack-controller-manager-token-gr4qn
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  12m                   default-scheduler  Successfully assigned kube-prometheus-stack-system/kube-prometheus-stack-controller-manager-9c49595f-m6hlz to sumologic-kubernetes-collection-operator
  Normal   Pulled     12m                   kubelet            Container image "gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0" already present on machine
  Normal   Created    12m                   kubelet            Created container kube-rbac-proxy
  Normal   Started    12m                   kubelet            Started container kube-rbac-proxy
  Normal   Pulled     12m                   kubelet            Successfully pulled image "localhost:32000/prom-operator:latest" in 40.39793ms
  Normal   Pulled     9m12s                 kubelet            Successfully pulled image "localhost:32000/prom-operator:latest" in 77.804653ms
  Normal   Pulled     5m1s                  kubelet            Successfully pulled image "localhost:32000/prom-operator:latest" in 33.196248ms
  Warning  BackOff    118s (x4 over 5m14s)  kubelet            Back-off restarting failed container
  Normal   Pulling    104s (x4 over 12m)    kubelet            Pulling image "localhost:32000/prom-operator:latest"
  Normal   Pulled     104s                  kubelet            Successfully pulled image "localhost:32000/prom-operator:latest" in 34.870757ms
  Normal   Created    104s (x4 over 12m)    kubelet            Created container manager
  Normal   Started    104s (x4 over 12m)    kubelet            Started container manager
$ kubectl logs -n kube-prometheus-stack-system kube-prometheus-stack-controller-manager-9c49595f-m6hlz manager 
{"level":"info","ts":1617885589.0111742,"logger":"cmd","msg":"Version","Go Version":"go1.15.11","GOOS":"linux","GOARCH":"amd64","helm-operator":"scorecard-kuttl/v2.0.0+git","commit":"6b6c021400894281b0d35033e83b5671e4c5f1ac"}
{"level":"info","ts":1617885589.0114315,"logger":"cmd","msg":"WATCH_NAMESPACE environment variable not set. Watching all namespaces.","Namespace":""}
{"level":"info","ts":1617885589.6114593,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"127.0.0.1:8080"}
{"level":"info","ts":1617885589.6125834,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885589.6126118,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885589.6126218,"logger":"helm.controller","msg":"Watching resource","apiVersion":"helm-chart.example.com/v1","kind":"Promhelm","namespace":"","reconcilePeriod":"1m0s"}
{"level":"info","ts":1617885589.6130738,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
I0408 12:39:49.613069       1 leaderelection.go:243] attempting to acquire leader lease kube-prometheus-stack-system/kube-prometheus-stack...
I0408 12:40:07.017515       1 leaderelection.go:253] successfully acquired lease kube-prometheus-stack-system/kube-prometheus-stack
{"level":"info","ts":1617885607.0180488,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: helm-chart.example.com/v1, Kind=Promhelm"}
{"level":"info","ts":1617885607.1210427,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting Controller"}
{"level":"info","ts":1617885607.1211226,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting workers","worker count":4}
{"level":"info","ts":1617885617.4087298,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885617.408756,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: policy/v1beta1, Kind=PodSecurityPolicy"}
{"level":"info","ts":1617885617.5090806,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"policy/v1beta1","kind":"PodSecurityPolicy"}
{"level":"info","ts":1617885617.5092545,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885617.509262,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"}
{"level":"info","ts":1617885617.609786,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding"}
{"level":"info","ts":1617885617.6101124,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885617.6101496,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: monitoring.coreos.com/v1, Kind=ServiceMonitor"}
{"level":"info","ts":1617885617.710829,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"monitoring.coreos.com/v1","kind":"ServiceMonitor"}
{"level":"info","ts":1617885617.7112002,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885617.7112153,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: monitoring.coreos.com/v1, Kind=PrometheusRule"}
{"level":"info","ts":1617885617.8114567,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"monitoring.coreos.com/v1","kind":"PrometheusRule"}
{"level":"info","ts":1617885617.8125825,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885617.812598,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: /v1, Kind=Service"}
{"level":"info","ts":1617885617.9143608,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"v1","kind":"Service"}
{"level":"info","ts":1617885617.9153116,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885617.9154415,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: /v1, Kind=ServiceAccount"}
{"level":"info","ts":1617885618.0158792,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"v1","kind":"ServiceAccount"}
{"level":"info","ts":1617885618.016231,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885618.0162437,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: apps/v1, Kind=Deployment"}
{"level":"info","ts":1617885618.1166298,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"apps/v1","kind":"Deployment"}
{"level":"info","ts":1617885618.1170475,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885618.1170623,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: monitoring.coreos.com/v1, Kind=Prometheus"}
{"level":"info","ts":1617885618.257182,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"monitoring.coreos.com/v1","kind":"Prometheus"}
{"level":"info","ts":1617885618.2598388,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885618.2598763,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: rbac.authorization.k8s.io/v1, Kind=ClusterRole"}
{"level":"info","ts":1617885618.3602767,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole"}
{"level":"info","ts":1617885618.3641388,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885618.3641825,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: apps/v1, Kind=DaemonSet"}
{"level":"info","ts":1617885618.5082662,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"apps/v1","kind":"DaemonSet"}
{"level":"info","ts":1617885618.5151618,"logger":"helm.controller","msg":"Reconciled release","namespace":"default","name":"promhelm-sample","apiVersion":"helm-chart.example.com/v1","kind":"Promhelm","release":"promhelm-sample"}
{"level":"info","ts":1617885689.213356,"logger":"helm.controller","msg":"Reconciled release","namespace":"default","name":"promhelm-sample","apiVersion":"helm-chart.example.com/v1","kind":"Promhelm","release":"promhelm-sample"}
$ kubectl logs -n kube-prometheus-stack-system kube-prometheus-stack-controller-manager-9c49595f-m6hlz manager -p
{"level":"info","ts":1617885392.0094311,"logger":"cmd","msg":"Version","Go Version":"go1.15.11","GOOS":"linux","GOARCH":"amd64","helm-operator":"scorecard-kuttl/v2.0.0+git","commit":"6b6c021400894281b0d35033e83b5671e4c5f1ac"}
{"level":"info","ts":1617885392.0097,"logger":"cmd","msg":"WATCH_NAMESPACE environment variable not set. Watching all namespaces.","Namespace":""}
{"level":"info","ts":1617885392.5659904,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"127.0.0.1:8080"}
{"level":"info","ts":1617885392.5668662,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885392.5668905,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885392.5668995,"logger":"helm.controller","msg":"Watching resource","apiVersion":"helm-chart.example.com/v1","kind":"Promhelm","namespace":"","reconcilePeriod":"1m0s"}
{"level":"info","ts":1617885392.5672748,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
I0408 12:36:32.567256       1 leaderelection.go:243] attempting to acquire leader lease kube-prometheus-stack-system/kube-prometheus-stack...
I0408 12:36:49.976819       1 leaderelection.go:253] successfully acquired lease kube-prometheus-stack-system/kube-prometheus-stack
{"level":"info","ts":1617885409.9772243,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: helm-chart.example.com/v1, Kind=Promhelm"}
{"level":"info","ts":1617885410.0789983,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting Controller"}
{"level":"info","ts":1617885410.0790343,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting workers","worker count":4}
{"level":"info","ts":1617885420.4141154,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885420.4141743,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: /v1, Kind=Service"}
{"level":"info","ts":1617885420.6142578,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"v1","kind":"Service"}
{"level":"info","ts":1617885420.6146474,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885420.6146653,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: monitoring.coreos.com/v1, Kind=ServiceMonitor"}
{"level":"info","ts":1617885420.7150357,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"monitoring.coreos.com/v1","kind":"ServiceMonitor"}
{"level":"info","ts":1617885420.7153766,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885420.715394,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: rbac.authorization.k8s.io/v1, Kind=ClusterRole"}
{"level":"info","ts":1617885420.863868,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole"}
{"level":"info","ts":1617885420.8646004,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885420.864618,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: monitoring.coreos.com/v1, Kind=Prometheus"}
{"level":"info","ts":1617885420.9706144,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"monitoring.coreos.com/v1","kind":"Prometheus"}
{"level":"info","ts":1617885420.970929,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885420.9709432,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: monitoring.coreos.com/v1, Kind=PrometheusRule"}
{"level":"info","ts":1617885421.071404,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"monitoring.coreos.com/v1","kind":"PrometheusRule"}
{"level":"info","ts":1617885421.071789,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885421.071804,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: /v1, Kind=ServiceAccount"}
{"level":"info","ts":1617885421.1719925,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"v1","kind":"ServiceAccount"}
{"level":"info","ts":1617885421.172388,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885421.1724393,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"}
{"level":"info","ts":1617885421.2729137,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding"}
{"level":"info","ts":1617885421.2758267,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885421.2758417,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: policy/v1beta1, Kind=PodSecurityPolicy"}
{"level":"info","ts":1617885421.3760803,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"policy/v1beta1","kind":"PodSecurityPolicy"}
{"level":"info","ts":1617885421.3806233,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885421.3806388,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: apps/v1, Kind=Deployment"}
{"level":"info","ts":1617885421.5051467,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"apps/v1","kind":"Deployment"}
{"level":"info","ts":1617885421.5117383,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617885421.5118723,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: apps/v1, Kind=DaemonSet"}
{"level":"info","ts":1617885421.812438,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"apps/v1","kind":"DaemonSet"}
{"level":"info","ts":1617885421.8141348,"logger":"helm.controller","msg":"Reconciled release","namespace":"default","name":"promhelm-sample","apiVersion":"helm-chart.example.com/v1","kind":"Promhelm","release":"promhelm-sample"}
{"level":"info","ts":1617885492.5111997,"logger":"helm.controller","msg":"Reconciled release","namespace":"default","name":"promhelm-sample","apiVersion":"helm-chart.example.com/v1","kind":"Promhelm","release":"promhelm-sample"}
$ kubectl get pods -n kube-prometheus-stack-system
NAME                                                      READY   STATUS             RESTARTS   AGE
kube-prometheus-stack-controller-manager-9c49595f-m6hlz   1/2     CrashLoopBackOff   5          25m
$ kubectl describe pod -n kube-prometheus-stack-system kube-prometheus-stack-controller-manager-9c49595f-m6hlz 
Name:         kube-prometheus-stack-controller-manager-9c49595f-m6hlz
Namespace:    kube-prometheus-stack-system
Priority:     0
Node:         sumologic-kubernetes-collection-operator/10.0.2.15
Start Time:   Thu, 08 Apr 2021 12:28:51 +0000
Labels:       control-plane=controller-manager
              pod-template-hash=9c49595f
Annotations:  cni.projectcalico.org/podIP: 10.1.148.16/32
              cni.projectcalico.org/podIPs: 10.1.148.16/32
Status:       Running
IP:           10.1.148.16
IPs:
  IP:           10.1.148.16
Controlled By:  ReplicaSet/kube-prometheus-stack-controller-manager-9c49595f
Containers:
  kube-rbac-proxy:
    Container ID:  containerd://c2690044d06512b64e0a2a7fbcdce5355f822bbe0a548e2065b8a06a0689e04d
    Image:         gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0
    Image ID:      gcr.io/kubebuilder/kube-rbac-proxy@sha256:a06e7b56c5e1e63b87abb417344f59bf4a8e53695b8463121537c3854c5fda82
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --secure-listen-address=0.0.0.0:8443
      --upstream=http://127.0.0.1:8080/
      --logtostderr=true
      --v=10
    State:          Running
      Started:      Thu, 08 Apr 2021 12:28:53 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-prometheus-stack-controller-manager-token-gr4qn (ro)
  manager:
    Container ID:  containerd://36e68a4e6731023f9faa9eaad6091f6863a7a332695071055da0f38b80caa108
    Image:         localhost:32000/prom-operator:latest
    Image ID:      localhost:32000/prom-operator@sha256:61ce10c111dee0f620e60676225c913bcb0d79e8bab816ecf0100da35c4d2236
    Port:          <none>
    Host Port:     <none>
    Args:
      --health-probe-bind-address=:8081
      --metrics-bind-address=127.0.0.1:8080
      --leader-elect
      --leader-election-id=kube-prometheus-stack
    State:          Running
      Started:      Thu, 08 Apr 2021 12:55:35 +0000
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Thu, 08 Apr 2021 12:49:58 +0000
      Finished:     Thu, 08 Apr 2021 12:52:45 +0000
    Ready:          True
    Restart Count:  6
    Limits:
      cpu:     100m
      memory:  90Mi
    Requests:
      cpu:        100m
      memory:     60Mi
    Liveness:     http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:    http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-prometheus-stack-controller-manager-token-gr4qn (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-prometheus-stack-controller-manager-token-gr4qn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-prometheus-stack-controller-manager-token-gr4qn
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  27m                   default-scheduler  Successfully assigned kube-prometheus-stack-system/kube-prometheus-stack-controller-manager-9c49595f-m6hlz to sumologic-kubernetes-collection-operator
  Normal   Pulled     27m                   kubelet            Container image "gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0" already present on machine
  Normal   Created    27m                   kubelet            Created container kube-rbac-proxy
  Normal   Started    27m                   kubelet            Started container kube-rbac-proxy
  Normal   Pulled     27m                   kubelet            Successfully pulled image "localhost:32000/prom-operator:latest" in 40.39793ms
  Normal   Pulled     23m                   kubelet            Successfully pulled image "localhost:32000/prom-operator:latest" in 77.804653ms
  Normal   Pulled     19m                   kubelet            Successfully pulled image "localhost:32000/prom-operator:latest" in 33.196248ms
  Normal   Pulled     16m                   kubelet            Successfully pulled image "localhost:32000/prom-operator:latest" in 34.870757ms
  Normal   Created    16m (x4 over 27m)     kubelet            Created container manager
  Normal   Started    16m (x4 over 27m)     kubelet            Started container manager
  Normal   Pulling    10m (x5 over 27m)     kubelet            Pulling image "localhost:32000/prom-operator:latest"
  Warning  BackOff    2m10s (x22 over 19m)  kubelet            Back-off restarting failed container
$ kubectl logs -n kube-prometheus-stack-system kube-prometheus-stack-controller-manager-9c49595f-m6hlz manager -p
{"level":"info","ts":1617886198.9097118,"logger":"cmd","msg":"Version","Go Version":"go1.15.11","GOOS":"linux","GOARCH":"amd64","helm-operator":"scorecard-kuttl/v2.0.0+git","commit":"6b6c021400894281b0d35033e83b5671e4c5f1ac"}
{"level":"info","ts":1617886198.90987,"logger":"cmd","msg":"WATCH_NAMESPACE environment variable not set. Watching all namespaces.","Namespace":""}
{"level":"info","ts":1617886199.512168,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"127.0.0.1:8080"}
{"level":"info","ts":1617886199.5129857,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886199.5130122,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886199.5130222,"logger":"helm.controller","msg":"Watching resource","apiVersion":"helm-chart.example.com/v1","kind":"Promhelm","namespace":"","reconcilePeriod":"1m0s"}
I0408 12:49:59.513374       1 leaderelection.go:243] attempting to acquire leader lease kube-prometheus-stack-system/kube-prometheus-stack...
{"level":"info","ts":1617886199.5134573,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
I0408 12:50:16.924652       1 leaderelection.go:253] successfully acquired lease kube-prometheus-stack-system/kube-prometheus-stack
{"level":"info","ts":1617886216.9249752,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: helm-chart.example.com/v1, Kind=Promhelm"}
{"level":"info","ts":1617886217.0261185,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting Controller"}
{"level":"info","ts":1617886217.0261621,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting workers","worker count":4}
{"level":"info","ts":1617886227.9078543,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886227.9079375,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: apps/v1, Kind=DaemonSet"}
{"level":"info","ts":1617886228.0095317,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"apps/v1","kind":"DaemonSet"}
{"level":"info","ts":1617886228.0105443,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886228.01056,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: monitoring.coreos.com/v1, Kind=PrometheusRule"}
{"level":"info","ts":1617886228.111435,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"monitoring.coreos.com/v1","kind":"PrometheusRule"}
{"level":"info","ts":1617886228.1117144,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886228.111728,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: policy/v1beta1, Kind=PodSecurityPolicy"}
{"level":"info","ts":1617886228.2127957,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"policy/v1beta1","kind":"PodSecurityPolicy"}
{"level":"info","ts":1617886228.213065,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886228.2130797,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: rbac.authorization.k8s.io/v1, Kind=ClusterRole"}
{"level":"info","ts":1617886228.3135145,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole"}
{"level":"info","ts":1617886228.313731,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886228.3137429,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"}
{"level":"info","ts":1617886228.4204497,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding"}
{"level":"info","ts":1617886228.422286,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886228.4223137,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: monitoring.coreos.com/v1, Kind=ServiceMonitor"}
{"level":"info","ts":1617886228.5226977,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"monitoring.coreos.com/v1","kind":"ServiceMonitor"}
{"level":"info","ts":1617886228.5236444,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886228.5237048,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: /v1, Kind=Service"}
{"level":"info","ts":1617886228.6241407,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"v1","kind":"Service"}
{"level":"info","ts":1617886228.6255262,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886228.6255434,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: apps/v1, Kind=Deployment"}
{"level":"info","ts":1617886228.8077042,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"apps/v1","kind":"Deployment"}
{"level":"info","ts":1617886228.8109655,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886228.81131,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: /v1, Kind=ServiceAccount"}
{"level":"info","ts":1617886228.9119663,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"v1","kind":"ServiceAccount"}
{"level":"info","ts":1617886228.9197288,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886228.919747,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: monitoring.coreos.com/v1, Kind=Prometheus"}
{"level":"info","ts":1617886229.0207133,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"monitoring.coreos.com/v1","kind":"Prometheus"}
{"level":"info","ts":1617886229.0234544,"logger":"helm.controller","msg":"Reconciled release","namespace":"default","name":"promhelm-sample","apiVersion":"helm-chart.example.com/v1","kind":"Promhelm","release":"promhelm-sample"}
{"level":"info","ts":1617886300.1119926,"logger":"helm.controller","msg":"Reconciled release","namespace":"default","name":"promhelm-sample","apiVersion":"helm-chart.example.com/v1","kind":"Promhelm","release":"promhelm-sample"}
$ kubectl logs -n kube-prometheus-stack-system kube-prometheus-stack-controller-manager-9c49595f-m6hlz manager 
{"level":"info","ts":1617886535.908826,"logger":"cmd","msg":"Version","Go Version":"go1.15.11","GOOS":"linux","GOARCH":"amd64","helm-operator":"scorecard-kuttl/v2.0.0+git","commit":"6b6c021400894281b0d35033e83b5671e4c5f1ac"}
{"level":"info","ts":1617886535.909052,"logger":"cmd","msg":"WATCH_NAMESPACE environment variable not set. Watching all namespaces.","Namespace":""}
{"level":"info","ts":1617886536.4631674,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"127.0.0.1:8080"}
{"level":"info","ts":1617886536.4638088,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886536.4638343,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886536.4638429,"logger":"helm.controller","msg":"Watching resource","apiVersion":"helm-chart.example.com/v1","kind":"Promhelm","namespace":"","reconcilePeriod":"1m0s"}
I0408 12:55:36.464046       1 leaderelection.go:243] attempting to acquire leader lease kube-prometheus-stack-system/kube-prometheus-stack...
{"level":"info","ts":1617886536.4643562,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
I0408 12:55:53.875118       1 leaderelection.go:253] successfully acquired lease kube-prometheus-stack-system/kube-prometheus-stack
{"level":"info","ts":1617886553.875273,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: helm-chart.example.com/v1, Kind=Promhelm"}
{"level":"info","ts":1617886553.9759536,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting Controller"}
{"level":"info","ts":1617886553.9759893,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting workers","worker count":4}
{"level":"info","ts":1617886564.4114046,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886564.4114366,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: policy/v1beta1, Kind=PodSecurityPolicy"}
{"level":"info","ts":1617886564.5128248,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"policy/v1beta1","kind":"PodSecurityPolicy"}
{"level":"info","ts":1617886564.5131855,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886564.5132465,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: rbac.authorization.k8s.io/v1, Kind=ClusterRole"}
{"level":"info","ts":1617886564.6135826,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole"}
{"level":"info","ts":1617886564.6137886,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886564.6137989,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"}
{"level":"info","ts":1617886564.7142005,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding"}
{"level":"info","ts":1617886564.715091,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886564.715105,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: monitoring.coreos.com/v1, Kind=PrometheusRule"}
{"level":"info","ts":1617886564.8153148,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"monitoring.coreos.com/v1","kind":"PrometheusRule"}
{"level":"info","ts":1617886564.8159556,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886564.8159695,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: /v1, Kind=Service"}
{"level":"info","ts":1617886564.916178,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"v1","kind":"Service"}
{"level":"info","ts":1617886564.919491,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886564.9195113,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: monitoring.coreos.com/v1, Kind=ServiceMonitor"}
{"level":"info","ts":1617886565.0196323,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"monitoring.coreos.com/v1","kind":"ServiceMonitor"}
{"level":"info","ts":1617886565.0211432,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886565.021177,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: monitoring.coreos.com/v1, Kind=Prometheus"}
{"level":"info","ts":1617886565.1213982,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"monitoring.coreos.com/v1","kind":"Prometheus"}
{"level":"info","ts":1617886565.1226847,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886565.1227014,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: /v1, Kind=ServiceAccount"}
{"level":"info","ts":1617886565.2251754,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"v1","kind":"ServiceAccount"}
{"level":"info","ts":1617886565.2256575,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886565.2256737,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: apps/v1, Kind=Deployment"}
{"level":"info","ts":1617886565.3258684,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"apps/v1","kind":"Deployment"}
{"level":"info","ts":1617886565.330389,"logger":"controller-runtime.injectors-warning","msg":"Injectors are deprecated, and will be removed in v0.10.x"}
{"level":"info","ts":1617886565.3304071,"logger":"controller-runtime.manager.controller.promhelm-controller","msg":"Starting EventSource","source":"kind source: apps/v1, Kind=DaemonSet"}
{"level":"info","ts":1617886565.4307084,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"helm-chart.example.com/v1","ownerKind":"Promhelm","apiVersion":"apps/v1","kind":"DaemonSet"}
{"level":"info","ts":1617886565.43179,"logger":"helm.controller","msg":"Reconciled release","namespace":"default","name":"promhelm-sample","apiVersion":"helm-chart.example.com/v1","kind":"Promhelm","release":"promhelm-sample"}
kasia-kujawa commented 3 years ago

I've found the reason in the kubectl describe output above:

    Last State:     Terminated
      Reason:       OOMKilled

so I think that assigning sufficient resources to the Pod will solve this issue.
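For reference, a minimal sketch of such a change, assuming the default scaffold layout where the manager container's resources are set in config/manager/manager.yaml (the 512Mi/256Mi values here are illustrative guesses, not tuned recommendations):

# config/manager/manager.yaml (excerpt, hypothetical values)
# Raise the manager container's memory so the Helm operator can render the
# large kube-prometheus-stack chart without hitting the 90Mi limit that the
# describe output above shows being enforced (OOMKilled, exit code 137).
spec:
  template:
    spec:
      containers:
      - name: manager
        resources:
          limits:
            cpu: 100m
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 256Mi

After editing, re-running make deploy IMG="localhost:32000/prom-operator:master" should roll out the new limits, and the kubelet should stop OOM-killing the manager container.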