kubecost / cluster-turndown

Automated turndown of Kubernetes clusters on specific schedules.

Controller does not pick up a re-applied schedule. #21

Closed · chap-dr closed this issue 4 years ago

chap-dr commented 4 years ago

Applied the example schedule and got an error saying the date was in the past (imo this should not matter, as the repeat was set to daily).
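
For context, the schedule in question is along the lines of the repo's example-schedule.yaml (a sketch; the start/end timestamps are illustrative, and repeat: daily is the relevant part):

apiVersion: kubecost.k8s.io/v1alpha1
kind: TurndownSchedule
metadata:
  name: example-schedule
  finalizers:
  - "finalizer.kubecost.k8s.io"
spec:
  start: 2020-03-12T00:00:00Z   # illustrative past date that triggers the error
  end: 2020-03-12T12:00:00Z
  repeat: daily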

After updating with a future date, I ran kubectl apply again, but the controller did not pick up the change. Removing and then re-adding the schedule made the controller pick up the change.
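
The workaround, roughly (assuming the schedule was created from example-schedule.yaml):

$ kubectl delete -f example-schedule.yaml
$ kubectl apply -f example-schedule.yaml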

mbolt35 commented 4 years ago

I've added a new issue for ignoring date on repeat: https://github.com/kubecost/cluster-turndown/issues/24

The other issue you're referring to is the way we handle failed (and completed) schedules. Whenever a schedule fails, we do not immediately delete the resource.

$ kubectl get tds
NAME               STATE            NEXT TURNDOWN   NEXT TURN UP
example-schedule   ScheduleFailed   <no value>      <no value>

Likewise, when a non-repeating schedule completes, the resource is not immediately deleted. These are mainly to show a status similar to a pod failing to locate an image. The resources will eventually be cleaned up (there's a 30 minute interval cleanup job that runs).
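
Until that cleanup runs, the failure state stays visible on the resource itself, e.g. with standard kubectl:

$ kubectl describe tds example-schedule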

With the example we provide, example-schedule, if it fails and you try to re-apply, it will attempt to update the resource, which is not currently supported. We have a note in our docs regarding updates via kubectl edit tds ..., but nothing specific to example-schedule.
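
In other words (a sketch; the file and resource names assume the shipped example):

# Re-applying the manifest attempts an update, which is not currently supported:
$ kubectl apply -f example-schedule.yaml

# Editing the live resource in place works, per the docs:
$ kubectl edit tds example-schedule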

I agree that this is a bit annoying, and we plan on addressing the resource-update issue soon. In the meantime, I've updated the documentation surrounding the example-schedule.yaml interaction.

Thanks again for all your input!

chap-dr commented 4 years ago

No problem at all, great to see the quick response. 👍