Hello Eirini friends!

We on the CAPI team have started running Eirini along with cf-deployment in our CI pipeline. As part of the pipeline, we also rotate certs regularly so that we never get into a situation where our certs have expired.
What we observe is that after certs are rotated, we can no longer retrieve app logs until we restart the `eirini-loggregator-fluentd` pods. The new certs do get properly propagated to the `eirini-loggregator-fluentd` pods running in k8s; however, both `fluentd` and `loggregator-agent` keep merrily running along with the old certs already loaded into memory. In a bosh-deployed world, when configuration changes occur, the jobs are stopped and started, so they pick up the new config. However, since the `eirini-loggregator-fluentd` pods exist outside of the bosh lifecycle, they are not restarted.
One option I see would be to add a command to the `configure-eirini-bosh` errand that restarts the pods. That would suit us just fine in our CI, since we don't care about logs until we run tests. However, for folks hoping to run this in production, log downtime between when the `doppler` instance group gets updated and when the post-deploy errand runs is probably unacceptable.
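For concreteness, the errand addition could look something like the sketch below. This is just a sketch under assumptions: it assumes the pods are managed by a DaemonSet named `eirini-loggregator-fluentd` in an `eirini` namespace (adjust both to your deployment), and it relies on `kubectl rollout restart`, which is available in kubectl 1.15+.

```shell
#!/usr/bin/env bash
# Hypothetical addition to the configure-eirini-bosh errand: after the
# rotated certs have been pushed, force a rolling restart so fluentd and
# loggregator-agent re-read them. Workload kind/name and namespace are
# assumptions, not taken from the release.
set -euo pipefail

NAMESPACE="${EIRINI_NAMESPACE:-eirini}"          # assumed namespace
WORKLOAD="daemonset/eirini-loggregator-fluentd"  # assumed workload kind/name

restart_log_pods() {
  # Trigger a rolling restart and wait until all pods are back up.
  kubectl -n "$NAMESPACE" rollout restart "$WORKLOAD"
  kubectl -n "$NAMESPACE" rollout status "$WORKLOAD" --timeout=120s
}

# Only attempt the restart when kubectl is actually on the PATH,
# so the errand script can be sourced and inspected elsewhere.
if command -v kubectl >/dev/null 2>&1; then
  restart_log_pods
fi
```

This still leaves the log gap between the `doppler` update and the errand run, of course; it only automates the manual restart.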
Maybe there's a more k8s-native way to have the pods rolling-restarted when new certs show up? Can you trigger things on changing secrets, for example?
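For what it's worth, Kubernetes does refresh the contents of a mounted Secret volume on its own (within the kubelet sync period), but it never restarts containers, so processes that load certs once at startup never notice. One common workaround is to stamp a hash of the cert Secret onto the pod template, so that any change to the Secret changes the template itself and triggers a rolling update. A sketch, assuming the pods are templated with Helm and the certs live in a Secret rendered from a hypothetical `certs-secret.yaml` template:

```yaml
# Hypothetical DaemonSet pod-template fragment; the template file name
# and Helm-based rendering are assumptions about the release.
spec:
  template:
    metadata:
      annotations:
        # Re-rendering after a cert rotation changes this value, which
        # changes the pod template and triggers a rolling restart.
        checksum/certs: {{ include (print $.Template.BasePath "/certs-secret.yaml") . | sha256sum }}
```

Without Helm in the loop, an operator such as stakater/Reloader can watch the Secret and perform the rolling restart, or the errand could compute the checksum itself and `kubectl patch` it onto the pod template for the same effect.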