Sounds like we're missing (the equivalent of) a pass through `_configure`, which makes sure services are running. In grafana-agent, `add_layer` is called only in `on_pebble_ready`:
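A minimal sketch of the two patterns, with illustrative names (the container name `agent`, the command line, and the layer contents are assumptions, not the charm's actual code): today the layer is only added in the pebble-ready handler, whereas a holistic `_configure` would guard on `can_connect()` and could be observed from several events:

```python
from ops.charm import CharmBase
from ops.main import main
from ops.model import ActiveStatus, WaitingStatus


class GrafanaAgentCharm(CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        # Current pattern: the layer is added only when Pebble reports ready.
        self.framework.observe(self.on.agent_pebble_ready, self._on_pebble_ready)

    def _on_pebble_ready(self, event):
        container = event.workload
        container.add_layer("agent", self._pebble_layer, combine=True)
        container.replan()
        self.unit.status = ActiveStatus()

    # Hypothetical holistic handler: safe to call from any hook, because it
    # checks that the Pebble socket is reachable before touching the workload.
    def _configure(self, _event):
        container = self.unit.get_container("agent")
        if not container.can_connect():
            # Right after pod re-creation, Pebble may not be up yet.
            self.unit.status = WaitingStatus("waiting for Pebble")
            return
        container.add_layer("agent", self._pebble_layer, combine=True)
        container.replan()
        self.unit.status = ActiveStatus()

    @property
    def _pebble_layer(self):
        return {
            "summary": "grafana agent layer",
            "services": {
                "agent": {
                    "override": "replace",
                    "command": "/bin/agent -config.file /etc/agent/agent.yaml",
                    "startup": "enabled",
                },
            },
        }


if __name__ == "__main__":
    main(GrafanaAgentCharm)
```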
This is no longer reproducible.
Bug Description
Let's say we need to emulate the death of a pod that uses `LokiPushApiConsumer` (grafana-agent-k8s). To do that, we delete the running pod with `kubectl delete pod ...`. Juju will re-create the pod, but we will get a stack trace in the log, which produces an error in some integration tests.
To Reproduce
1. `juju add-model paka`
2. `charmcraft pack` (grafana-agent-k8s-operator with the latest version of the LokiPushApi lib)
3. `juju deploy ./grafana-agent-k8s_ubuntu-20.04-amd64.charm --resource agent-image=grafana/agent:v0.20.1`
4. `microk8s.kubectl delete pod -n paka grafana-agent-k8s-0`
5. `juju debug-log`
Environment
- juju: 2.9.29
- microk8s: microk8s.kubectl delete pod -n paka grafana-agent-k8s-0
- grafana-agent: https://github.com/canonical/grafana-agent-k8s-operator/pull/44
Relevant log output
Additional context
As far as I understand, the problem starts in the method `_on_lifecycle_event`, when we execute `self.on.loki_push_api_endpoint_joined.emit()`; this method is executed on `upgrade_charm`. The event is observed in the charm by the method `_on_loki_push_api_endpoint_joined`, which executes `self._update_config(event)`:
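A simplified sketch of that chain, under stated assumptions (the container name `agent`, the config path, and the `_config_file()` helper are illustrative, not the library's or charm's actual code):

```python
from ops.charm import CharmBase
from ops.framework import EventBase, EventSource, Object, ObjectEvents


class LokiPushApiEndpointJoined(EventBase):
    """Emitted when a Loki push API endpoint becomes available."""


class LokiPushApiEvents(ObjectEvents):
    loki_push_api_endpoint_joined = EventSource(LokiPushApiEndpointJoined)


class LokiPushApiConsumer(Object):
    """Simplified sketch of the library's consumer side."""

    on = LokiPushApiEvents()

    def __init__(self, charm, relation_name="logging"):
        super().__init__(charm, relation_name)
        # `upgrade_charm` is among the lifecycle events observed here, and it
        # fires right after Juju re-creates the deleted pod.
        self.framework.observe(charm.on.upgrade_charm, self._on_lifecycle_event)

    def _on_lifecycle_event(self, _event):
        self.on.loki_push_api_endpoint_joined.emit()


class GrafanaAgentOperatorCharm(CharmBase):
    """Simplified sketch of the charm side."""

    def __init__(self, *args):
        super().__init__(*args)
        self._loki_consumer = LokiPushApiConsumer(self)
        self.framework.observe(
            self._loki_consumer.on.loki_push_api_endpoint_joined,
            self._on_loki_push_api_endpoint_joined,
        )

    def _on_loki_push_api_endpoint_joined(self, event):
        self._update_config(event)

    def _update_config(self, _event):
        # Pushing the config needs a live Pebble socket.  Right after
        # `kubectl delete pod`, `upgrade_charm` can run before Pebble is up,
        # so this call raises `ops.pebble.APIError` -- the reported trace.
        container = self.unit.get_container("agent")
        container.push("/etc/agent/agent.yaml", self._config_file(), make_dirs=True)

    def _config_file(self):
        # Illustrative stand-in for the real config rendering.
        return "server: {}\n"
```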
We can avoid the stack trace by catching the `APIError` exception:
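For instance (a sketch of the workaround, not the actual patch, reusing the same illustrative names as above):

```python
from ops.pebble import APIError


def _update_config(self, _event):
    container = self.unit.get_container("agent")
    try:
        container.push("/etc/agent/agent.yaml", self._config_file(), make_dirs=True)
    except APIError:
        # Pebble is not up yet (the pod was just re-created).  Skip for now;
        # the config is pushed again once `pebble_ready` fires.
        return
```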
But the question is: should we emit `loki_push_api_endpoint_joined` on the `upgrade_charm` event?