canonical / grafana-k8s-operator

https://charmhub.io/grafana-k8s
Apache License 2.0

Charm container stuck in crash loop backoff when using Juju 2.9.35 (VSphere) #141

Closed stonepreston closed 1 year ago

stonepreston commented 2 years ago

Bug Description

The charm container gets stuck in a crash loop when deploying grafana-k8s using Juju 2.9.35. The other containers (grafana and litestream) both reach a ready status. This does not happen on MicroK8s, but it does happen on Charmed Kubernetes / Kubernetes Core running in vSphere. I had previously opened an issue where it was noted that the unit is rebooted/restarted when Grafana is deployed, which seemed uncommon, so some vSphere wonkiness may be at play.

I was able to deploy Grafana on 2.9.34 as well as 2.9.33, so it seems related to changes in 2.9.35. I also deployed prometheus-k8s to see whether this was an issue affecting other k8s charms, but Prometheus did not seem to have problems and went active/idle after a minute or two.
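A quick way to reproduce that comparison is to deploy a second sidecar charm into the same model and watch both settle (a sketch, not the exact commands used; the prometheus-k8s channel here is an assumption):

# deploy a second sidecar charm into the same model for comparison
juju deploy prometheus-k8s --channel edge --trust
# poll status until the units settle (or stall)
watch -n 5 juju status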

To Reproduce

  1. Bootstrap a 2.9.35 controller on VSphere
  2. Set model defaults: juju model-defaults vsphere juju-http-proxy=http://squid.internal:3128 apt-http-proxy=http://squid.internal:3128 snap-http-proxy=http://squid.internal:3128 juju-https-proxy=http://squid.internal:3128 apt-https-proxy=http://squid.internal:3128 snap-https-proxy=http://squid.internal:3128 apt-no-proxy=localhost,127.0.0.1,ppa.launchpad.net,launchpad.net juju-no-proxy=localhost,127.0.0.1,0.0.0.0,ppa.launchpad.net,launchpad.net,10.0.8.0/24,10.246.154.0/24
  3. Add model for k8s-core: juju add-model --config enable-os-refresh-update=false --config enable-os-upgrade=false --config logging-config='<root>=DEBUG' --config datastore=vsanDatastore --config primary-network=$YOUR_VLAN_HERE k8s-core vsphere/Boston
  4. Deploy k8s-core: juju deploy kubernetes-core --overlay vsphere-overlay.yaml --trust --debug --channel edge. The overlay YAML looks like this:
    description: Charmed Kubernetes overlay to add native vSphere support.
    applications:
      vsphere-integrator:
        annotations:
          gui-x: "600"
          gui-y: "300"
        charm: vsphere-integrator
        num_units: 1
        trust: true
        options:
          datastore: vsanDatastore
          folder: k8s-crew-root
    relations:
    - ['vsphere-integrator', 'kubernetes-control-plane']
    - ['vsphere-integrator', 'kubernetes-worker']
  5. Copy kubeconfig: juju scp kubernetes-control-plane/0:config ~/.kube/config
  6. Apply the storage class: kubectl apply -f vsphere-storageclass.yaml. The storage class YAML looks like this:
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: mystorage
    provisioner: kubernetes.io/vsphere-volume
    parameters:
      diskformat: zeroedthick
  7. add a k8s-cloud: juju add-k8s $YOUR_K8S_CLOUD --controller $YOUR_CONTROLLER --storage mystorage
  8. Deploy grafana-k8s: juju deploy grafana-k8s --channel edge --trust
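Once grafana-k8s is deployed, the crash loop can be observed with standard kubectl commands (a sketch; the namespace is the Juju model name, stonepreston-cos in this report):

# pod and container state
kubectl get pods -n stonepreston-cos
kubectl describe pod grafana-k8s-0 -n stonepreston-cos
# logs from the crashing charm container, including the previous attempt
kubectl logs grafana-k8s-0 -c charm -n stonepreston-cos --previous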

Environment

The Juju version being used is 2.9.35. The cloud being deployed into is the Boston vSphere cloud. Kubernetes 1.25 is deployed as part of the edge kubernetes-core bundle (a slimmed-down version of Charmed Kubernetes).

As mentioned above, this does not happen on Juju 2.9.34 or 2.9.33; it appears isolated to the newly released 2.9.35.

Relevant log output

# kubectl describe
kubectl describe pods -n stonepreston-cos
Name:             grafana-k8s-0
Namespace:        stonepreston-cos
Priority:         0
Service Account:  grafana-k8s
Node:             juju-ab7ac4-1/10.246.154.153
Start Time:       Mon, 17 Oct 2022 09:58:38 -0500
Labels:           app.kubernetes.io/name=grafana-k8s
                  controller-revision-hash=grafana-k8s-7696977bcb
                  statefulset.kubernetes.io/pod-name=grafana-k8s-0
Annotations:      controller.juju.is/id: 377150c5-51aa-422e-8707-d621b5754511
                  juju.is/version: 2.9.35
                  model.juju.is/id: 82239d5a-57f7-40a4-88e9-a3727951ff02
                  unit.juju.is/id: grafana-k8s/0
Status:           Running
IP:               192.168.102.136
IPs:
  IP:           192.168.102.136
Controlled By:  StatefulSet/grafana-k8s
Init Containers:
  charm-init:
    Container ID:  containerd://8b46c907e0c0aa34f39cd6dd83959bd39a33c55d1ad18246a1731e8a47258192
    Image:         rocks.canonical.com/cdk/jujusolutions/jujud-operator:2.9.35
    Image ID:      rocks.canonical.com/cdk/jujusolutions/jujud-operator@sha256:b5313b7611b82efd9ac96a0d3c8da5e30e87aa11ff3b17ab71ee2c6a68cba758
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/containeragent
    Args:
      init
      --containeragent-pebble-dir
      /containeragent/pebble
      --charm-modified-version
      0
      --data-dir
      /var/lib/juju
      --bin-dir
      /charm/bin
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 17 Oct 2022 09:58:53 -0500
      Finished:     Mon, 17 Oct 2022 09:58:53 -0500
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      grafana-k8s-application-config  Secret  Optional: false
    Environment:
      JUJU_CONTAINER_NAMES:  grafana,litestream
      JUJU_K8S_POD_NAME:     grafana-k8s-0 (v1:metadata.name)
      JUJU_K8S_POD_UUID:      (v1:metadata.uid)
    Mounts:
      /charm/bin from charm-data (rw,path="charm/bin")
      /charm/containers from charm-data (rw,path="charm/containers")
      /containeragent/pebble from charm-data (rw,path="containeragent/pebble")
      /var/lib/juju from charm-data (rw,path="var/lib/juju")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mftr7 (ro)
Containers:
  charm:
    Container ID:  containerd://b975a221c7d6e27d8203e964881a1d8798910b9f9b04591fdebb3124d9f67f33
    Image:         rocks.canonical.com/cdk/jujusolutions/charm-base:ubuntu-20.04
    Image ID:      rocks.canonical.com/cdk/jujusolutions/charm-base@sha256:5ccefd1a92d63baa961680c22a47e01213c99e9c06280c732a1910a5c126f2d2
    Port:          <none>
    Host Port:     <none>
    Command:
      /charm/bin/pebble
    Args:
      run
      --http
      :38812
      --verbose
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 17 Oct 2022 10:05:23 -0500
      Finished:     Mon, 17 Oct 2022 10:05:58 -0500
    Ready:          False
    Restart Count:  5
    Liveness:       http-get http://:38812/v1/health%3Flevel=alive delay=30s timeout=1s period=5s #success=1 #failure=1
    Readiness:      http-get http://:38812/v1/health%3Flevel=ready delay=30s timeout=1s period=5s #success=1 #failure=1
    Environment:
      JUJU_CONTAINER_NAMES:  grafana,litestream
      HTTP_PROBE_PORT:       3856
    Mounts:
      /charm/bin from charm-data (ro,path="charm/bin")
      /charm/containers from charm-data (rw,path="charm/containers")
      /var/lib/juju from charm-data (rw,path="var/lib/juju")
      /var/lib/juju/storage/database/0 from grafana-k8s-database-163432a4 (rw)
      /var/lib/pebble/default from charm-data (rw,path="containeragent/pebble")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mftr7 (ro)
  grafana:
    Container ID:  containerd://937ed7f7d97046e0e19fb918750471946e0b328b2d4db6c179d7765c20a07dfc
    Image:         registry.jujucharms.com/charm/h71m6jk2jeap1qu5lv9nv5mplqayr91q34lqp/grafana-image@sha256:cb9b47b4a53ae5f3da0fe40157e8eb20d0e120ae76da4dd9296f4b6c8ee62520
    Image ID:      registry.jujucharms.com/charm/h71m6jk2jeap1qu5lv9nv5mplqayr91q34lqp/grafana-image@sha256:cb9b47b4a53ae5f3da0fe40157e8eb20d0e120ae76da4dd9296f4b6c8ee62520
    Port:          <none>
    Host Port:     <none>
    Command:
      /charm/bin/pebble
    Args:
      run
      --create-dirs
      --hold
      --http
      :38813
      --verbose
    State:          Running
      Started:      Mon, 17 Oct 2022 09:59:54 -0500
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:38813/v1/health%3Flevel=alive delay=30s timeout=1s period=5s #success=1 #failure=1
    Readiness:      http-get http://:38813/v1/health%3Flevel=ready delay=30s timeout=1s period=5s #success=1 #failure=1
    Environment:
      JUJU_CONTAINER_NAME:  grafana
      PEBBLE_SOCKET:        /charm/container/pebble.socket
    Mounts:
      /charm/bin/pebble from charm-data (ro,path="charm/bin/pebble")
      /charm/container from charm-data (rw,path="charm/containers/grafana")
      /var/lib/grafana from grafana-k8s-database-163432a4 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mftr7 (ro)
  litestream:
    Container ID:  containerd://5484bef6a363f50d77cbe6671059c9269502b9a72bb29dfb8c8e655467e134a9
    Image:         registry.jujucharms.com/charm/h71m6jk2jeap1qu5lv9nv5mplqayr91q34lqp/litestream-image@sha256:8ab4b042f6c84ec51cabd5a9caef7b5394080c88fa1d7c445f201780e39e8ea7
    Image ID:      registry.jujucharms.com/charm/h71m6jk2jeap1qu5lv9nv5mplqayr91q34lqp/litestream-image@sha256:8ab4b042f6c84ec51cabd5a9caef7b5394080c88fa1d7c445f201780e39e8ea7
    Port:          <none>
    Host Port:     <none>
    Command:
      /charm/bin/pebble
    Args:
      run
      --create-dirs
      --hold
      --http
      :38814
      --verbose
    State:          Running
      Started:      Mon, 17 Oct 2022 10:00:01 -0500
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:38814/v1/health%3Flevel=alive delay=30s timeout=1s period=5s #success=1 #failure=1
    Readiness:      http-get http://:38814/v1/health%3Flevel=ready delay=30s timeout=1s period=5s #success=1 #failure=1
    Environment:
      JUJU_CONTAINER_NAME:  litestream
      PEBBLE_SOCKET:        /charm/container/pebble.socket
    Mounts:
      /charm/bin/pebble from charm-data (ro,path="charm/bin/pebble")
      /charm/container from charm-data (rw,path="charm/containers/litestream")
      /var/lib/grafana from grafana-k8s-database-163432a4 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mftr7 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  grafana-k8s-database-163432a4:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  grafana-k8s-database-163432a4-grafana-k8s-0
    ReadOnly:   false
  charm-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-mftr7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/arch=amd64
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                     From                     Message
  ----     ------                  ----                    ----                     -------
  Warning  FailedScheduling        9m53s                   default-scheduler        0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
  Normal   Scheduled               9m51s                   default-scheduler        Successfully assigned stonepreston-cos/grafana-k8s-0 to juju-ab7ac4-1
  Normal   SuccessfulAttachVolume  9m50s                   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-da4fa45b-6002-442c-9d74-c413b45ac192"
  Normal   Pulled                  9m37s                   kubelet                  Container image "rocks.canonical.com/cdk/jujusolutions/jujud-operator:2.9.35" already present on machine
  Normal   Created                 9m36s                   kubelet                  Created container charm-init
  Normal   Started                 9m36s                   kubelet                  Started container charm-init
  Normal   Pulling                 9m35s                   kubelet                  Pulling image "rocks.canonical.com/cdk/jujusolutions/charm-base:ubuntu-20.04"
  Normal   Pulled                  9m18s                   kubelet                  Successfully pulled image "rocks.canonical.com/cdk/jujusolutions/charm-base:ubuntu-20.04" in 17.60974236s
  Normal   Pulling                 9m16s                   kubelet                  Pulling image "registry.jujucharms.com/charm/h71m6jk2jeap1qu5lv9nv5mplqayr91q34lqp/grafana-image@sha256:cb9b47b4a53ae5f3da0fe40157e8eb20d0e120ae76da4dd9296f4b6c8ee62520"
  Normal   Pulled                  8m36s                   kubelet                  Successfully pulled image "registry.jujucharms.com/charm/h71m6jk2jeap1qu5lv9nv5mplqayr91q34lqp/grafana-image@sha256:cb9b47b4a53ae5f3da0fe40157e8eb20d0e120ae76da4dd9296f4b6c8ee62520" in 39.667093839s
  Normal   Created                 8m36s                   kubelet                  Created container grafana
  Normal   Started                 8m35s                   kubelet                  Started container grafana
  Normal   Pulling                 8m35s                   kubelet                  Pulling image "registry.jujucharms.com/charm/h71m6jk2jeap1qu5lv9nv5mplqayr91q34lqp/litestream-image@sha256:8ab4b042f6c84ec51cabd5a9caef7b5394080c88fa1d7c445f201780e39e8ea7"
  Normal   Pulled                  8m28s                   kubelet                  Successfully pulled image "registry.jujucharms.com/charm/h71m6jk2jeap1qu5lv9nv5mplqayr91q34lqp/litestream-image@sha256:8ab4b042f6c84ec51cabd5a9caef7b5394080c88fa1d7c445f201780e39e8ea7" in 7.040337768s
  Normal   Created                 8m28s                   kubelet                  Created container litestream
  Normal   Started                 8m28s                   kubelet                  Started container litestream
  Warning  Unhealthy               7m54s (x2 over 7m54s)   kubelet                  Readiness probe failed: HTTP probe failed with statuscode: 502
  Normal   Created                 7m35s (x3 over 9m16s)   kubelet                  Created container charm
  Normal   Pulled                  7m35s (x2 over 8m27s)   kubelet                  Container image "rocks.canonical.com/cdk/jujusolutions/charm-base:ubuntu-20.04" already present on machine
  Normal   Started                 7m34s (x3 over 9m16s)   kubelet                  Started container charm
  Warning  BackOff                 4m34s (x10 over 7m51s)  kubelet                  Back-off restarting failed container

Name:             modeloperator-6bcb5dc5f9-hcgrd
Namespace:        stonepreston-cos
Priority:         0
Service Account:  modeloperator
Node:             juju-ab7ac4-1/10.246.154.153
Start Time:       Mon, 17 Oct 2022 09:58:25 -0500
Labels:           model.juju.is/disable-webhook=true
                  operator.juju.is/name=modeloperator
                  operator.juju.is/target=model
                  pod-template-hash=6bcb5dc5f9
Annotations:      <none>
Status:           Running
IP:               192.168.102.135
IPs:
  IP:           192.168.102.135
Controlled By:  ReplicaSet/modeloperator-6bcb5dc5f9
Containers:
  juju-operator:
    Container ID:  containerd://4fe3543de3f1fd2358a7036420fe3fcc87e30593bf6ea2c403a270c0f1d94d6b
    Image:         rocks.canonical.com/cdk/jujusolutions/jujud-operator:2.9.35
    Image ID:      rocks.canonical.com/cdk/jujusolutions/jujud-operator@sha256:b5313b7611b82efd9ac96a0d3c8da5e30e87aa11ff3b17ab71ee2c6a68cba758
    Port:          17071/TCP
    Host Port:     0/TCP
    Command:
      /bin/sh
    Args:
      -c
      export JUJU_DATA_DIR=/var/lib/juju
      export JUJU_TOOLS_DIR=$JUJU_DATA_DIR/tools

      mkdir -p $JUJU_TOOLS_DIR
      cp /opt/jujud $JUJU_TOOLS_DIR/jujud

      $JUJU_TOOLS_DIR/jujud model --model-uuid=82239d5a-57f7-40a4-88e9-a3727951ff02

    State:          Running
      Started:      Mon, 17 Oct 2022 09:58:52 -0500
    Ready:          True
    Restart Count:  0
    Environment:
      HTTP_PORT:          17071
      SERVICE_NAME:       modeloperator
      SERVICE_NAMESPACE:  stonepreston-cos
    Mounts:
      /var/lib/juju/agents/model-82239d5a-57f7-40a4-88e9-a3727951ff02/template-agent.conf from modeloperator (rw,path="template-agent.conf")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jhmvb (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  modeloperator:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      modeloperator
    Optional:  false
  kube-api-access-jhmvb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  10m    default-scheduler  Successfully assigned stonepreston-cos/modeloperator-6bcb5dc5f9-hcgrd to juju-ab7ac4-1
  Normal  Pulling    10m    kubelet            Pulling image "rocks.canonical.com/cdk/jujusolutions/jujud-operator:2.9.35"
  Normal  Pulled     9m39s  kubelet            Successfully pulled image "rocks.canonical.com/cdk/jujusolutions/jujud-operator:2.9.35" in 23.822076221s
  Normal  Created    9m37s  kubelet            Created container juju-operator
  Normal  Started    9m37s  kubelet            Started container juju-operator

# juju debug-log
juju debug-log
unit-grafana-k8s-0: 10:25:46 INFO juju.worker.logger logger worker started
unit-grafana-k8s-0: 10:25:46 WARNING juju.worker.proxyupdater unable to set snap core settings [proxy.http= proxy.https= proxy.store=]: exec: "snap": executable file not found in $PATH, output: ""
unit-grafana-k8s-0: 10:25:46 INFO juju.worker.caasupgrader abort check blocked until version event received
unit-grafana-k8s-0: 10:25:46 INFO juju.worker.caasupgrader unblocking abort check
unit-grafana-k8s-0: 10:25:46 INFO juju.agent.tools ensure jujuc symlinks in /var/lib/juju/tools/unit-grafana-k8s-0
unit-grafana-k8s-0: 10:25:46 INFO juju.worker.leadership grafana-k8s/0 promoted to leadership of grafana-k8s
unit-grafana-k8s-0: 10:25:46 INFO juju.worker.uniter unit "grafana-k8s/0" started
unit-grafana-k8s-0: 10:25:46 INFO juju.worker.uniter resuming charm install
unit-grafana-k8s-0: 10:25:46 INFO juju.worker.uniter.charm detected interrupted deploy of charm "ch:amd64/focal/grafana-k8s-45"
unit-grafana-k8s-0: 10:26:16 INFO juju.worker.caasunitterminationworker terminating due to SIGTERM
unit-grafana-k8s-0: 10:25:46 INFO juju.cmd running containerAgent [2.9.35 da3416008ea4ce7851a4c967ae191a0044917024 gc go1.19.2]
unit-grafana-k8s-0: 10:25:46 INFO juju.cmd.containeragent.unit start "unit"
unit-grafana-k8s-0: 10:25:46 INFO juju.worker.upgradesteps upgrade steps for 2.9.35 have already been run.
unit-grafana-k8s-0: 10:25:46 INFO juju.worker.probehttpserver starting http server on [::]:65301
unit-grafana-k8s-0: 10:25:46 INFO juju.api connection established to "wss://10.246.154.88:17070/model/82239d5a-57f7-40a4-88e9-a3727951ff02/api"
unit-grafana-k8s-0: 10:25:46 INFO juju.worker.apicaller [82239d] "unit-grafana-k8s-0" successfully connected to "10.246.154.88:17070"
unit-grafana-k8s-0: 10:25:46 INFO juju.worker.migrationminion migration phase is now: NONE
^C
stone@stone-desktop:~/ck/charm-kube-ovn$ juju debug-log --replay
controller-0: 09:58:23 INFO juju.worker.apicaller [82239d] "machine-0" successfully connected to "localhost:17070"
controller-0: 09:58:23 INFO juju.worker.logforwarder config change - log forwarding not enabled
controller-0: 09:58:23 INFO juju.worker.logger logger worker started
controller-0: 09:58:23 INFO juju.worker.pruner.statushistory status history config: max age: 336h0m0s, max collection size 5120M for stonepreston-cos (82239d5a-57f7-40a4-88e9-a3727951ff02)
controller-0: 09:58:23 INFO juju.worker.pruner.action status history config: max age: 336h0m0s, max collection size 5120M for stonepreston-cos (82239d5a-57f7-40a4-88e9-a3727951ff02)
controller-0: 09:58:32 INFO juju.worker.caasapplicationprovisioner.runner start "grafana-k8s"
model-82239d5a-57f7-40a4-88e9-a3727951ff02: 09:58:53 INFO juju.worker.caasupgrader abort check blocked until version event received
model-82239d5a-57f7-40a4-88e9-a3727951ff02: 09:58:53 INFO juju.worker.caasupgrader unblocking abort check
model-82239d5a-57f7-40a4-88e9-a3727951ff02: 09:58:55 INFO juju.worker.muxhttpserver starting http server on [::]:17071
model-82239d5a-57f7-40a4-88e9-a3727951ff02: 09:58:55 INFO juju.worker.caasadmission ensuring model k8s webhook configurations
unit-grafana-k8s-0: 09:59:13 INFO juju.cmd running containerAgent [2.9.35 da3416008ea4ce7851a4c967ae191a0044917024 gc go1.19.2]
unit-grafana-k8s-0: 09:59:13 INFO juju.cmd.containeragent.unit start "unit"
unit-grafana-k8s-0: 09:59:13 INFO juju.worker.upgradesteps upgrade steps for 2.9.35 have already been run.
unit-grafana-k8s-0: 09:59:13 INFO juju.worker.probehttpserver starting http server on [::]:65301
unit-grafana-k8s-0: 09:59:13 INFO juju.api connection established to "wss://10.246.154.88:17070/model/82239d5a-57f7-40a4-88e9-a3727951ff02/api"
unit-grafana-k8s-0: 09:59:13 INFO juju.worker.apicaller [82239d] "unit-grafana-k8s-0" successfully connected to "10.246.154.88:17070"
unit-grafana-k8s-0: 09:59:13 INFO juju.worker.migrationminion migration phase is now: NONE
unit-grafana-k8s-0: 09:59:13 INFO juju.worker.logger logger worker started
unit-grafana-k8s-0: 09:59:13 WARNING juju.worker.proxyupdater unable to set snap core settings [proxy.http= proxy.https= proxy.store=]: exec: "snap": executable file not found in $PATH, output: ""
unit-grafana-k8s-0: 09:59:13 INFO juju.worker.caasupgrader abort check blocked until version event received
unit-grafana-k8s-0: 09:59:13 INFO juju.worker.caasupgrader unblocking abort check
unit-grafana-k8s-0: 09:59:13 INFO juju.worker.leadership grafana-k8s/0 promoted to leadership of grafana-k8s
unit-grafana-k8s-0: 09:59:13 INFO juju.agent.tools ensure jujuc symlinks in /var/lib/juju/tools/unit-grafana-k8s-0
unit-grafana-k8s-0: 09:59:13 INFO juju.worker.uniter unit "grafana-k8s/0" started
unit-grafana-k8s-0: 09:59:13 INFO juju.worker.uniter resuming charm install
unit-grafana-k8s-0: 09:59:13 INFO juju.worker.uniter.charm downloading ch:amd64/focal/grafana-k8s-45 from API server
unit-grafana-k8s-0: 09:59:13 INFO juju.downloader downloading from ch:amd64/focal/grafana-k8s-45
unit-grafana-k8s-0: 09:59:13 INFO juju.downloader download complete ("ch:amd64/focal/grafana-k8s-45")
unit-grafana-k8s-0: 09:59:14 INFO juju.downloader download verified ("ch:amd64/focal/grafana-k8s-45")
unit-grafana-k8s-0: 09:59:43 INFO juju.worker.caasunitterminationworker terminating due to SIGTERM
unit-grafana-k8s-0: 10:00:02 INFO juju.cmd running containerAgent [2.9.35 da3416008ea4ce7851a4c967ae191a0044917024 gc go1.19.2]
unit-grafana-k8s-0: 10:00:02 INFO juju.cmd.containeragent.unit start "unit"
unit-grafana-k8s-0: 10:00:02 INFO juju.worker.upgradesteps upgrade steps for 2.9.35 have already been run.
unit-grafana-k8s-0: 10:00:02 INFO juju.worker.probehttpserver starting http server on [::]:65301
unit-grafana-k8s-0: 10:00:02 INFO juju.api connection established to "wss://10.246.154.88:17070/model/82239d5a-57f7-40a4-88e9-a3727951ff02/api"
unit-grafana-k8s-0: 10:00:02 INFO juju.worker.apicaller [82239d] "unit-grafana-k8s-0" successfully connected to "10.246.154.88:17070"
unit-grafana-k8s-0: 10:00:03 INFO juju.api connection established to "wss://10.246.154.88:17070/model/82239d5a-57f7-40a4-88e9-a3727951ff02/api"
unit-grafana-k8s-0: 10:00:03 INFO juju.worker.apicaller [82239d] "unit-grafana-k8s-0" successfully connected to "10.246.154.88:17070"
unit-grafana-k8s-0: 10:00:03 INFO juju.worker.migrationminion migration phase is now: NONE
unit-grafana-k8s-0: 10:00:03 INFO juju.worker.logger logger worker started
unit-grafana-k8s-0: 10:00:03 WARNING juju.worker.proxyupdater unable to set snap core settings [proxy.http= proxy.https= proxy.store=]: exec: "snap": executable file not found in $PATH, output: ""
unit-grafana-k8s-0: 10:00:03 INFO juju.worker.leadership grafana-k8s/0 promoted to leadership of grafana-k8s
unit-grafana-k8s-0: 10:00:03 INFO juju.agent.tools ensure jujuc symlinks in /var/lib/juju/tools/unit-grafana-k8s-0
unit-grafana-k8s-0: 10:00:03 INFO juju.worker.caasupgrader abort check blocked until version event received
unit-grafana-k8s-0: 10:00:03 INFO juju.worker.caasupgrader unblocking abort check
unit-grafana-k8s-0: 10:00:03 INFO juju.worker.uniter unit "grafana-k8s/0" started
unit-grafana-k8s-0: 10:00:03 INFO juju.worker.uniter resuming charm install
unit-grafana-k8s-0: 10:00:03 INFO juju.worker.uniter.charm detected interrupted deploy of charm "ch:amd64/focal/grafana-k8s-45"
unit-grafana-k8s-0: 10:00:32 INFO juju.worker.caasunitterminationworker terminating due to SIGTERM
unit-grafana-k8s-0: 10:00:55 INFO juju.cmd running containerAgent [2.9.35 da3416008ea4ce7851a4c967ae191a0044917024 gc go1.19.2]
unit-grafana-k8s-0: 10:00:55 INFO juju.cmd.containeragent.unit start "unit"
unit-grafana-k8s-0: 10:00:55 INFO juju.worker.upgradesteps upgrade steps for 2.9.35 have already been run.
unit-grafana-k8s-0: 10:00:55 INFO juju.worker.probehttpserver starting http server on [::]:65301
unit-grafana-k8s-0: 10:00:55 INFO juju.api connection established to "wss://10.246.154.88:17070/model/82239d5a-57f7-40a4-88e9-a3727951ff02/api"
unit-grafana-k8s-0: 10:00:55 INFO juju.worker.apicaller [82239d] "unit-grafana-k8s-0" successfully connected to "10.246.154.88:17070"
unit-grafana-k8s-0: 10:00:55 INFO juju.worker.migrationminion migration phase is now: NONE
unit-grafana-k8s-0: 10:00:55 INFO juju.worker.logger logger worker started
unit-grafana-k8s-0: 10:00:55 WARNING juju.worker.proxyupdater unable to set snap core settings [proxy.http= proxy.https= proxy.store=]: exec: "snap": executable file not found in $PATH, output: ""
unit-grafana-k8s-0: 10:00:55 INFO juju.worker.leadership grafana-k8s/0 promoted to leadership of grafana-k8s
unit-grafana-k8s-0: 10:00:55 INFO juju.worker.caasupgrader abort check blocked until version event received
unit-grafana-k8s-0: 10:00:55 INFO juju.agent.tools ensure jujuc symlinks in /var/lib/juju/tools/unit-grafana-k8s-0
unit-grafana-k8s-0: 10:00:55 INFO juju.worker.caasupgrader unblocking abort check
unit-grafana-k8s-0: 10:00:55 INFO juju.worker.uniter unit "grafana-k8s/0" started
unit-grafana-k8s-0: 10:00:55 INFO juju.worker.uniter resuming charm install
unit-grafana-k8s-0: 10:00:55 INFO juju.worker.uniter.charm detected interrupted deploy of charm "ch:amd64/focal/grafana-k8s-45"
unit-grafana-k8s-0: 10:01:25 INFO juju.worker.caasunitterminationworker terminating due to SIGTERM
unit-grafana-k8s-0: 10:01:58 INFO juju.cmd running containerAgent [2.9.35 da3416008ea4ce7851a4c967ae191a0044917024 gc go1.19.2]
unit-grafana-k8s-0: 10:01:58 INFO juju.cmd.containeragent.unit start "unit"
unit-grafana-k8s-0: 10:01:58 INFO juju.worker.upgradesteps upgrade steps for 2.9.35 have already been run.
unit-grafana-k8s-0: 10:01:58 INFO juju.worker.probehttpserver starting http server on [::]:65301
unit-grafana-k8s-0: 10:01:58 INFO juju.api connection established to "wss://10.246.154.88:17070/model/82239d5a-57f7-40a4-88e9-a3727951ff02/api"
unit-grafana-k8s-0: 10:01:58 INFO juju.worker.apicaller [82239d] "unit-grafana-k8s-0" successfully connected to "10.246.154.88:17070"
unit-grafana-k8s-0: 10:01:58 INFO juju.worker.migrationminion migration phase is now: NONE
unit-grafana-k8s-0: 10:01:58 INFO juju.worker.logger logger worker started
unit-grafana-k8s-0: 10:01:58 WARNING juju.worker.proxyupdater unable to set snap core settings [proxy.http= proxy.https= proxy.store=]: exec: "snap": executable file not found in $PATH, output: ""
unit-grafana-k8s-0: 10:01:58 INFO juju.worker.leadership grafana-k8s/0 promoted to leadership of grafana-k8s
unit-grafana-k8s-0: 10:01:58 INFO juju.agent.tools ensure jujuc symlinks in /var/lib/juju/tools/unit-grafana-k8s-0
unit-grafana-k8s-0: 10:01:58 INFO juju.worker.caasupgrader abort check blocked until version event received
unit-grafana-k8s-0: 10:01:58 INFO juju.worker.caasupgrader unblocking abort check
unit-grafana-k8s-0: 10:01:58 INFO juju.worker.uniter unit "grafana-k8s/0" started
unit-grafana-k8s-0: 10:01:58 INFO juju.worker.uniter resuming charm install
unit-grafana-k8s-0: 10:01:58 INFO juju.worker.uniter.charm detected interrupted deploy of charm "ch:amd64/focal/grafana-k8s-45"
unit-grafana-k8s-0: 10:02:28 INFO juju.worker.caasunitterminationworker terminating due to SIGTERM
unit-grafana-k8s-0: 10:03:19 INFO juju.cmd running containerAgent [2.9.35 da3416008ea4ce7851a4c967ae191a0044917024 gc go1.19.2]
unit-grafana-k8s-0: 10:03:19 INFO juju.cmd.containeragent.unit start "unit"
unit-grafana-k8s-0: 10:03:19 INFO juju.worker.upgradesteps upgrade steps for 2.9.35 have already been run.
unit-grafana-k8s-0: 10:03:19 INFO juju.worker.probehttpserver starting http server on [::]:65301
unit-grafana-k8s-0: 10:03:19 INFO juju.api connection established to "wss://10.246.154.88:17070/model/82239d5a-57f7-40a4-88e9-a3727951ff02/api"
unit-grafana-k8s-0: 10:03:19 INFO juju.worker.apicaller [82239d] "unit-grafana-k8s-0" successfully connected to "10.246.154.88:17070"
unit-grafana-k8s-0: 10:03:19 INFO juju.worker.migrationminion migration phase is now: NONE
unit-grafana-k8s-0: 10:03:19 INFO juju.worker.logger logger worker started
unit-grafana-k8s-0: 10:03:19 INFO juju.worker.leadership grafana-k8s/0 promoted to leadership of grafana-k8s
unit-grafana-k8s-0: 10:03:19 WARNING juju.worker.proxyupdater unable to set snap core settings [proxy.http= proxy.https= proxy.store=]: exec: "snap": executable file not found in $PATH, output: ""
unit-grafana-k8s-0: 10:03:19 INFO juju.agent.tools ensure jujuc symlinks in /var/lib/juju/tools/unit-grafana-k8s-0
unit-grafana-k8s-0: 10:03:19 INFO juju.worker.caasupgrader abort check blocked until version event received
unit-grafana-k8s-0: 10:03:19 INFO juju.worker.caasupgrader unblocking abort check
unit-grafana-k8s-0: 10:03:19 INFO juju.worker.uniter unit "grafana-k8s/0" started
unit-grafana-k8s-0: 10:03:19 INFO juju.worker.uniter resuming charm install
unit-grafana-k8s-0: 10:03:19 INFO juju.worker.uniter.charm detected interrupted deploy of charm "ch:amd64/focal/grafana-k8s-45"
unit-grafana-k8s-0: 10:03:49 INFO juju.worker.caasunitterminationworker terminating due to SIGTERM
unit-grafana-k8s-0: 10:05:23 INFO juju.cmd running containerAgent [2.9.35 da3416008ea4ce7851a4c967ae191a0044917024 gc go1.19.2]
unit-grafana-k8s-0: 10:05:23 INFO juju.cmd.containeragent.unit start "unit"
unit-grafana-k8s-0: 10:05:23 INFO juju.worker.upgradesteps upgrade steps for 2.9.35 have already been run.
unit-grafana-k8s-0: 10:05:23 INFO juju.worker.probehttpserver starting http server on [::]:65301
unit-grafana-k8s-0: 10:05:23 INFO juju.api connection established to "wss://10.246.154.88:17070/model/82239d5a-57f7-40a4-88e9-a3727951ff02/api"
unit-grafana-k8s-0: 10:05:23 INFO juju.worker.apicaller [82239d] "unit-grafana-k8s-0" successfully connected to "10.246.154.88:17070"
unit-grafana-k8s-0: 10:05:23 INFO juju.api connection established to "wss://10.246.154.88:17070/model/82239d5a-57f7-40a4-88e9-a3727951ff02/api"
unit-grafana-k8s-0: 10:05:23 INFO juju.worker.apicaller [82239d] "unit-grafana-k8s-0" successfully connected to "10.246.154.88:17070"
unit-grafana-k8s-0: 10:05:23 INFO juju.worker.migrationminion migration phase is now: NONE
unit-grafana-k8s-0: 10:05:23 INFO juju.worker.logger logger worker started
unit-grafana-k8s-0: 10:05:23 WARNING juju.worker.proxyupdater unable to set snap core settings [proxy.http= proxy.https= proxy.store=]: exec: "snap": executable file not found in $PATH, output: ""
unit-grafana-k8s-0: 10:05:23 INFO juju.agent.tools ensure jujuc symlinks in /var/lib/juju/tools/unit-grafana-k8s-0
unit-grafana-k8s-0: 10:05:23 INFO juju.worker.caasupgrader abort check blocked until version event received
unit-grafana-k8s-0: 10:05:23 INFO juju.worker.caasupgrader unblocking abort check
unit-grafana-k8s-0: 10:05:23 INFO juju.worker.leadership grafana-k8s/0 promoted to leadership of grafana-k8s
unit-grafana-k8s-0: 10:05:23 INFO juju.worker.uniter unit "grafana-k8s/0" started
unit-grafana-k8s-0: 10:05:23 INFO juju.worker.uniter resuming charm install
unit-grafana-k8s-0: 10:05:23 INFO juju.worker.uniter.charm detected interrupted deploy of charm "ch:amd64/focal/grafana-k8s-45"
unit-grafana-k8s-0: 10:05:53 INFO juju.worker.caasunitterminationworker terminating due to SIGTERM
unit-grafana-k8s-0: 10:08:45 INFO juju.cmd running containerAgent [2.9.35 da3416008ea4ce7851a4c967ae191a0044917024 gc go1.19.2]
unit-grafana-k8s-0: 10:08:45 INFO juju.cmd.containeragent.unit start "unit"
unit-grafana-k8s-0: 10:08:45 INFO juju.worker.upgradesteps upgrade steps for 2.9.35 have already been run.
unit-grafana-k8s-0: 10:08:45 INFO juju.worker.probehttpserver starting http server on [::]:65301
unit-grafana-k8s-0: 10:08:45 INFO juju.api connection established to "wss://10.246.154.88:17070/model/82239d5a-57f7-40a4-88e9-a3727951ff02/api"
unit-grafana-k8s-0: 10:08:45 INFO juju.worker.apicaller [82239d] "unit-grafana-k8s-0" successfully connected to "10.246.154.88:17070"
unit-grafana-k8s-0: 10:08:45 INFO juju.worker.migrationminion migration phase is now: NONE
unit-grafana-k8s-0: 10:08:45 INFO juju.worker.logger logger worker started
unit-grafana-k8s-0: 10:08:45 WARNING juju.worker.proxyupdater unable to set snap core settings [proxy.http= proxy.https= proxy.store=]: exec: "snap": executable file not found in $PATH, output: ""
unit-grafana-k8s-0: 10:08:45 INFO juju.agent.tools ensure jujuc symlinks in /var/lib/juju/tools/unit-grafana-k8s-0
unit-grafana-k8s-0: 10:08:45 INFO juju.worker.caasupgrader abort check blocked until version event received
unit-grafana-k8s-0: 10:08:45 INFO juju.worker.caasupgrader unblocking abort check
unit-grafana-k8s-0: 10:08:45 INFO juju.worker.uniter unit "grafana-k8s/0" started
unit-grafana-k8s-0: 10:08:45 INFO juju.worker.leadership grafana-k8s/0 promoted to leadership of grafana-k8s
unit-grafana-k8s-0: 10:08:45 INFO juju.worker.uniter resuming charm install
unit-grafana-k8s-0: 10:08:45 INFO juju.worker.uniter.charm detected interrupted deploy of charm "ch:amd64/focal/grafana-k8s-45"
unit-grafana-k8s-0: 10:09:15 INFO juju.worker.caasunitterminationworker terminating due to SIGTERM
unit-grafana-k8s-0: 10:14:27 INFO juju.cmd running containerAgent [2.9.35 da3416008ea4ce7851a4c967ae191a0044917024 gc go1.19.2]
unit-grafana-k8s-0: 10:14:27 INFO juju.cmd.containeragent.unit start "unit"
unit-grafana-k8s-0: 10:14:27 INFO juju.worker.upgradesteps upgrade steps for 2.9.35 have already been run.
unit-grafana-k8s-0: 10:14:27 INFO juju.worker.probehttpserver starting http server on [::]:65301
unit-grafana-k8s-0: 10:14:27 INFO juju.api connection established to "wss://10.246.154.88:17070/model/82239d5a-57f7-40a4-88e9-a3727951ff02/api"
unit-grafana-k8s-0: 10:14:27 INFO juju.worker.apicaller [82239d] "unit-grafana-k8s-0" successfully connected to "10.246.154.88:17070"
unit-grafana-k8s-0: 10:14:27 INFO juju.api connection established to "wss://10.246.154.88:17070/model/82239d5a-57f7-40a4-88e9-a3727951ff02/api"
unit-grafana-k8s-0: 10:14:27 INFO juju.worker.apicaller [82239d] "unit-grafana-k8s-0" successfully connected to "10.246.154.88:17070"
unit-grafana-k8s-0: 10:14:27 INFO juju.worker.migrationminion migration phase is now: NONE
unit-grafana-k8s-0: 10:14:27 INFO juju.worker.logger logger worker started
unit-grafana-k8s-0: 10:14:27 WARNING juju.worker.proxyupdater unable to set snap core settings [proxy.http= proxy.https= proxy.store=]: exec: "snap": executable file not found in $PATH, output: ""
unit-grafana-k8s-0: 10:14:27 INFO juju.agent.tools ensure jujuc symlinks in /var/lib/juju/tools/unit-grafana-k8s-0
unit-grafana-k8s-0: 10:14:27 INFO juju.worker.caasupgrader abort check blocked until version event received
unit-grafana-k8s-0: 10:14:27 INFO juju.worker.caasupgrader unblocking abort check
unit-grafana-k8s-0: 10:14:27 INFO juju.worker.leadership grafana-k8s/0 promoted to leadership of grafana-k8s
unit-grafana-k8s-0: 10:14:27 INFO juju.worker.uniter unit "grafana-k8s/0" started
unit-grafana-k8s-0: 10:14:27 INFO juju.worker.uniter resuming charm install
unit-grafana-k8s-0: 10:14:27 INFO juju.worker.uniter.charm detected interrupted deploy of charm "ch:amd64/focal/grafana-k8s-45"
unit-grafana-k8s-0: 10:14:57 INFO juju.worker.caasunitterminationworker terminating due to SIGTERM
unit-grafana-k8s-0: 10:20:06 INFO juju.cmd running containerAgent [2.9.35 da3416008ea4ce7851a4c967ae191a0044917024 gc go1.19.2]
unit-grafana-k8s-0: 10:20:06 INFO juju.cmd.containeragent.unit start "unit"
unit-grafana-k8s-0: 10:20:06 INFO juju.worker.upgradesteps upgrade steps for 2.9.35 have already been run.
unit-grafana-k8s-0: 10:20:06 INFO juju.worker.probehttpserver starting http server on [::]:65301
unit-grafana-k8s-0: 10:20:06 INFO juju.api connection established to "wss://10.246.154.88:17070/model/82239d5a-57f7-40a4-88e9-a3727951ff02/api"
unit-grafana-k8s-0: 10:20:06 INFO juju.worker.apicaller [82239d] "unit-grafana-k8s-0" successfully connected to "10.246.154.88:17070"
unit-grafana-k8s-0: 10:20:06 INFO juju.worker.migrationminion migration phase is now: NONE
unit-grafana-k8s-0: 10:20:06 INFO juju.worker.logger logger worker started
unit-grafana-k8s-0: 10:20:06 WARNING juju.worker.proxyupdater unable to set snap core settings [proxy.http= proxy.https= proxy.store=]: exec: "snap": executable file not found in $PATH, output: ""
unit-grafana-k8s-0: 10:20:06 INFO juju.agent.tools ensure jujuc symlinks in /var/lib/juju/tools/unit-grafana-k8s-0
unit-grafana-k8s-0: 10:20:06 INFO juju.worker.caasupgrader abort check blocked until version event received
unit-grafana-k8s-0: 10:20:06 INFO juju.worker.caasupgrader unblocking abort check
unit-grafana-k8s-0: 10:20:06 INFO juju.worker.leadership grafana-k8s/0 promoted to leadership of grafana-k8s
unit-grafana-k8s-0: 10:20:06 INFO juju.worker.uniter unit "grafana-k8s/0" started
unit-grafana-k8s-0: 10:20:06 INFO juju.worker.uniter resuming charm install
unit-grafana-k8s-0: 10:20:06 INFO juju.worker.uniter.charm detected interrupted deploy of charm "ch:amd64/focal/grafana-k8s-45"
unit-grafana-k8s-0: 10:20:36 INFO juju.worker.caasunitterminationworker terminating due to SIGTERM
unit-grafana-k8s-0: 10:25:46 INFO juju.cmd running containerAgent [2.9.35 da3416008ea4ce7851a4c967ae191a0044917024 gc go1.19.2]
unit-grafana-k8s-0: 10:25:46 INFO juju.cmd.containeragent.unit start "unit"
unit-grafana-k8s-0: 10:25:46 INFO juju.worker.upgradesteps upgrade steps for 2.9.35 have already been run.
unit-grafana-k8s-0: 10:25:46 INFO juju.worker.probehttpserver starting http server on [::]:65301
unit-grafana-k8s-0: 10:25:46 INFO juju.api connection established to "wss://10.246.154.88:17070/model/82239d5a-57f7-40a4-88e9-a3727951ff02/api"
unit-grafana-k8s-0: 10:25:46 INFO juju.worker.apicaller [82239d] "unit-grafana-k8s-0" successfully connected to "10.246.154.88:17070"
unit-grafana-k8s-0: 10:25:46 INFO juju.worker.migrationminion migration phase is now: NONE
unit-grafana-k8s-0: 10:25:46 INFO juju.worker.logger logger worker started
unit-grafana-k8s-0: 10:25:46 WARNING juju.worker.proxyupdater unable to set snap core settings [proxy.http= proxy.https= proxy.store=]: exec: "snap": executable file not found in $PATH, output: ""
unit-grafana-k8s-0: 10:25:46 INFO juju.worker.caasupgrader abort check blocked until version event received
unit-grafana-k8s-0: 10:25:46 INFO juju.worker.caasupgrader unblocking abort check
unit-grafana-k8s-0: 10:25:46 INFO juju.agent.tools ensure jujuc symlinks in /var/lib/juju/tools/unit-grafana-k8s-0
unit-grafana-k8s-0: 10:25:46 INFO juju.worker.leadership grafana-k8s/0 promoted to leadership of grafana-k8s
unit-grafana-k8s-0: 10:25:46 INFO juju.worker.uniter unit "grafana-k8s/0" started
unit-grafana-k8s-0: 10:25:46 INFO juju.worker.uniter resuming charm install
unit-grafana-k8s-0: 10:25:46 INFO juju.worker.uniter.charm detected interrupted deploy of charm "ch:amd64/focal/grafana-k8s-45"
unit-grafana-k8s-0: 10:26:16 INFO juju.worker.caasunitterminationworker terminating due to SIGTERM

# kubectl logs of the charm container
kubectl logs grafana-k8s-0 -c charm -n stonepreston-cos
2022-10-17T15:25:46.086Z [pebble] HTTP API server listening on ":38812".
2022-10-17T15:25:46.086Z [pebble] Started daemon.
2022-10-17T15:25:46.146Z [pebble] POST /v1/services 59.993765ms 202
2022-10-17T15:25:46.147Z [pebble] Started default services with change 19.
2022-10-17T15:25:46.174Z [pebble] Service "container-agent" starting: /charm/bin/containeragent unit --data-dir /var/lib/juju --append-env "PATH=$PATH:/charm/bin" --show-log --charm-modified-version 0
2022-10-17T15:25:46.228Z [container-agent] 2022-10-17 15:25:46 INFO juju.cmd supercommand.go:56 running containerAgent [2.9.35 da3416008ea4ce7851a4c967ae191a0044917024 gc go1.19.2]
2022-10-17T15:25:46.228Z [container-agent] starting containeragent unit command
2022-10-17T15:25:46.228Z [container-agent] containeragent unit "unit-grafana-k8s-0" start (2.9.35 [gc])
2022-10-17T15:25:46.229Z [container-agent] 2022-10-17 15:25:46 INFO juju.cmd.containeragent.unit runner.go:556 start "unit"
2022-10-17T15:25:46.229Z [container-agent] 2022-10-17 15:25:46 INFO juju.worker.upgradesteps worker.go:60 upgrade steps for 2.9.35 have already been run.
2022-10-17T15:25:46.230Z [container-agent] 2022-10-17 15:25:46 INFO juju.worker.probehttpserver server.go:157 starting http server on [::]:65301
2022-10-17T15:25:46.247Z [container-agent] 2022-10-17 15:25:46 INFO juju.api apiclient.go:688 connection established to "wss://10.246.154.88:17070/model/82239d5a-57f7-40a4-88e9-a3727951ff02/api"
2022-10-17T15:25:46.261Z [container-agent] 2022-10-17 15:25:46 INFO juju.worker.apicaller connect.go:163 [82239d] "unit-grafana-k8s-0" successfully connected to "10.246.154.88:17070"
2022-10-17T15:25:46.283Z [container-agent] 2022-10-17 15:25:46 INFO juju.worker.migrationminion worker.go:142 migration phase is now: NONE
2022-10-17T15:25:46.290Z [container-agent] 2022-10-17 15:25:46 INFO juju.worker.logger logger.go:120 logger worker started
2022-10-17T15:25:46.297Z [container-agent] 2022-10-17 15:25:46 WARNING juju.worker.proxyupdater proxyupdater.go:282 unable to set snap core settings [proxy.http= proxy.https= proxy.store=]: exec: "snap": executable file not found in $PATH, output: ""
2022-10-17T15:25:46.313Z [container-agent] 2022-10-17 15:25:46 INFO juju.worker.caasupgrader upgrader.go:113 abort check blocked until version event received
2022-10-17T15:25:46.313Z [container-agent] 2022-10-17 15:25:46 INFO juju.worker.caasupgrader upgrader.go:119 unblocking abort check
2022-10-17T15:25:46.314Z [container-agent] 2022-10-17 15:25:46 INFO juju.agent.tools symlinks.go:20 ensure jujuc symlinks in /var/lib/juju/tools/unit-grafana-k8s-0
2022-10-17T15:25:46.334Z [container-agent] 2022-10-17 15:25:46 INFO juju.worker.leadership tracker.go:194 grafana-k8s/0 promoted to leadership of grafana-k8s
2022-10-17T15:25:46.336Z [container-agent] 2022-10-17 15:25:46 INFO juju.worker.uniter uniter.go:326 unit "grafana-k8s/0" started
2022-10-17T15:25:46.338Z [container-agent] 2022-10-17 15:25:46 INFO juju.worker.uniter uniter.go:631 resuming charm install
2022-10-17T15:25:46.412Z [container-agent] 2022-10-17 15:25:46 INFO juju.worker.uniter.charm manifest_deployer.go:182 detected interrupted deploy of charm "ch:amd64/focal/grafana-k8s-45"
2022-10-17T15:25:56.120Z [pebble] Check "readiness" failure 1 (threshold 3): received non-20x status code 418
2022-10-17T15:26:06.118Z [pebble] Check "readiness" failure 2 (threshold 3): received non-20x status code 418
2022-10-17T15:26:16.119Z [pebble] Check "readiness" failure 3 (threshold 3): received non-20x status code 418
2022-10-17T15:26:16.119Z [pebble] Check "readiness" failure threshold 3 hit, triggering action
2022-10-17T15:26:16.119Z [pebble] Service "container-agent" on-check-failure action is "shutdown", triggering server exit
2022-10-17T15:26:16.119Z [pebble] Server exiting!
2022-10-17T15:26:16.147Z [pebble] Stopping all running services.
2022-10-17T15:26:16.249Z [container-agent] 2022-10-17 15:26:16 INFO juju.worker.caasunitterminationworker worker.go:82 terminating due to SIGTERM
2022-10-17T15:26:21.271Z [pebble] Service "container-agent" stopped
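The tail of that log is the key part: Pebble's readiness check fails three times, and the container-agent service's on-check-failure action is "shutdown", so Pebble exits and kubelet restarts the container. For illustration, a Pebble layer that produces this behaviour would look roughly like the following (a hedged sketch, not the plan Juju actually writes; the check URL path and level are placeholders, while the command, 10s period, and threshold of 3 are taken from the log above):

services:
  container-agent:
    override: replace
    startup: enabled
    command: /charm/bin/containeragent unit --data-dir /var/lib/juju --append-env "PATH=$PATH:/charm/bin" --show-log --charm-modified-version 0
    on-check-failure:
      readiness: shutdown   # a failed readiness check shuts the Pebble daemon down
checks:
  readiness:
    override: replace
    period: 10s
    threshold: 3
    http:
      url: http://localhost:65301/readiness   # placeholder path; port from the probehttpserver log line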

Additional context

No response

gruyaume commented 2 years ago

I am experiencing the same issue on my local machine.

The charm can get back to active-idle if I delete the pod.
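For reference, that workaround is just deleting the pod so the StatefulSet recreates it (a sketch; substitute the namespace of your model):

kubectl delete pod grafana-k8s-0 -n <model-namespace>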

Loki is also stuck in crash loop backoff.

sed-i commented 2 years ago

Maybe related: https://github.com/canonical/operator/issues/847

simskij commented 2 years ago

This is a Juju error whose offending change should have been rolled back in a patch release. Could you please try again and verify whether that is the case? It seems to work in 2.9.37 as well as in 3.1.
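A minimal re-test sketch (controller and cloud names are taken from the follow-up below; pinning the patch release with --agent-version is an assumption about how the re-test was done):

juju bootstrap vsphere/Boston stonepreston-vs --agent-version 2.9.37
juju add-k8s stonepreston-vs-k8s-cloud --controller stonepreston-vs --storage mystorage
juju add-model stonepreston-cos stonepreston-vs-k8s-cloud
juju deploy grafana-k8s --channel edge --trust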

stonepreston commented 1 year ago

@simskij I have tried with a 2.9.37 controller in our vSphere environment. It does get past the crash loop backoff, but now the charm gets stuck here:

juju status
Model             Controller       Cloud/Region                       Version  SLA          Timestamp
stonepreston-cos  stonepreston-vs  stonepreston-vs-k8s-cloud/default  2.9.37   unsupported  14:53:33-06:00

App          Version  Status   Scale  Charm        Channel  Rev  Address        Exposed  Message
grafana-k8s  9.2.1    waiting      1  grafana-k8s  edge      52  10.152.183.99  no       installing agent

Unit            Workload  Agent  Address       Ports  Message
grafana-k8s/0*  unknown   idle   192.168.0.21   

Juju debug log:

Grafana container log:

Litestream container log:

kubectl logs grafana-k8s-0 -n stonepreston-cos -c litestream
2022-11-16T20:46:02.058Z [pebble] HTTP API server listening on ":38814".
2022-11-16T20:46:02.058Z [pebble] Started daemon.
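Beyond the container logs, the unit's status history and a filtered debug log are the usual next steps for a workload stuck at unknown/idle (a sketch; flags as in Juju 2.9):

juju show-status-log grafana-k8s/0
juju debug-log --replay --include unit-grafana-k8s-0 --level DEBUG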

I can close this issue and open a new one if you'd like, since this no longer seems to be related to the crash loop.

sed-i commented 1 year ago

@stonepreston I wonder if you hit a resource limit. Mind checking

microk8s kubectl get pods/grafana-k8s-0 -n stonepreston-cos -o=jsonpath='{.status}' | jq
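In case the status output alone is inconclusive, the requests and limits can also be read straight from the pod spec (a sketch of an equivalent query):

kubectl get pod grafana-k8s-0 -n stonepreston-cos -o json | jq '.spec.containers[] | {name, resources}'
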
stonepreston commented 1 year ago

@sed-i Here is the output of the status:

kubectl get pods/grafana-k8s-0 -n stonepreston-cos -o=jsonpath='{.status}' | jq
{
  "conditions": [
    {
      "lastProbeTime": null,
      "lastTransitionTime": "2022-11-16T20:46:01Z",
      "status": "True",
      "type": "Initialized"
    },
    {
      "lastProbeTime": null,
      "lastTransitionTime": "2022-11-16T20:47:14Z",
      "status": "True",
      "type": "Ready"
    },
    {
      "lastProbeTime": null,
      "lastTransitionTime": "2022-11-16T20:47:14Z",
      "status": "True",
      "type": "ContainersReady"
    },
    {
      "lastProbeTime": null,
      "lastTransitionTime": "2022-11-16T20:45:55Z",
      "status": "True",
      "type": "PodScheduled"
    }
  ],
  "containerStatuses": [
    {
      "containerID": "containerd://48fc9fc494fc9d09200015bc37e05c801eb3ddcd70ab043b777b30f87a3ef2fc",
      "image": "rocks.canonical.com/cdk/jujusolutions/charm-base:ubuntu-20.04",
      "imageID": "rocks.canonical.com/cdk/jujusolutions/charm-base@sha256:5ccefd1a92d63baa961680c22a47e01213c99e9c06280c732a1910a5c126f2d2",
      "lastState": {},
      "name": "charm",
      "ready": true,
      "restartCount": 0,
      "started": true,
      "state": {
        "running": {
          "startedAt": "2022-11-16T20:46:01Z"
        }
      }
    },
    {
      "containerID": "containerd://666b62c11fc4d977cf52d8f9b405af40957a4633762d1c0d350999221a816145",
      "image": "sha256:3f60358b5ba29becbfeb620dae8832f6bb93563a0fe83890f5c8c2c7e77f8e5f",
      "imageID": "registry.jujucharms.com/charm/h71m6jk2jeap1qu5lv9nv5mplqayr91q34lqp/grafana-image@sha256:1a1d900ee938adeaaa167d4f7cd720129762e481c29eb8021d42d23a9332d506",
      "lastState": {},
      "name": "grafana",
      "ready": true,
      "restartCount": 0,
      "started": true,
      "state": {
        "running": {
          "startedAt": "2022-11-16T20:46:01Z"
        }
      }
    },
    {
      "containerID": "containerd://0946f830795f62185b937efe2173ae974fde8aae5d16d582849ac377880b3f22",
      "image": "sha256:810676c15a7137f5ade23a3f589ee683063723152bd9aa51d371356f3bce83db",
      "imageID": "registry.jujucharms.com/charm/h71m6jk2jeap1qu5lv9nv5mplqayr91q34lqp/litestream-image@sha256:8ab4b042f6c84ec51cabd5a9caef7b5394080c88fa1d7c445f201780e39e8ea7",
      "lastState": {},
      "name": "litestream",
      "ready": true,
      "restartCount": 0,
      "started": true,
      "state": {
        "running": {
          "startedAt": "2022-11-16T20:46:02Z"
        }
      }
    }
  ],
  "hostIP": "10.246.154.193",
  "initContainerStatuses": [
    {
      "containerID": "containerd://3e6232f94b4ebed1b9dae3bfc6e2a43b367a2dbd9657e6c110604005325e18f1",
      "image": "rocks.canonical.com/cdk/jujusolutions/jujud-operator:2.9.37",
      "imageID": "rocks.canonical.com/cdk/jujusolutions/jujud-operator@sha256:5a8797ceec40324721854ad7f96fdfdcd32a9b738b58c2c25e33dc81effde296",
      "lastState": {},
      "name": "charm-init",
      "ready": true,
      "restartCount": 0,
      "state": {
        "terminated": {
          "containerID": "containerd://3e6232f94b4ebed1b9dae3bfc6e2a43b367a2dbd9657e6c110604005325e18f1",
          "exitCode": 0,
          "finishedAt": "2022-11-16T20:46:00Z",
          "reason": "Completed",
          "startedAt": "2022-11-16T20:46:00Z"
        }
      }
    }
  ],
  "phase": "Running",
  "podIP": "192.168.0.21",
  "podIPs": [
    {
      "ip": "192.168.0.21"
    }
  ],
  "qosClass": "Burstable",
  "startTime": "2022-11-16T20:45:55Z"
}
sed-i commented 1 year ago

Not a resource limit then. Thanks for checking.