davhdavh closed this issue 9 months ago.
could also use this field: https://github.com/GalleyBytes/terraform-operator/issues/123 e.g. setting it to 0 would force a retry.
I'm going to go with an annotation on the resource. When the annotation is observed, the controller will retrigger the pipeline. By convention, the annotation's value can also specify whether "setup" should be triggered. Like your example above, it'll look something like the following:
Retry by adding an annotation:
tf.galleybyte.com/retry: my reason
Any value will work. Retry by adding a timestamp:
tf.galleybyte.com/retry: 20231023T0904
Via convention, a retry can be triggered from the setup task:
tf.galleybyte.com/retry: 20231023T0904/setup
Anytime the annotation is changed, the terraform-operator controller will start the pipeline over.
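The convention above could be sketched roughly as follows. This is a hypothetical illustration, not the actual controller code: the annotation key is taken from the examples above, and the function names and change-detection logic are assumptions.

```python
# Hypothetical sketch of the proposed retry-annotation convention.
# The annotation key comes from the examples above; everything else
# (function names, the "last observed" bookkeeping) is made up here.
RETRY_ANNOTATION = "tf.galleybyte.com/retry"

def parse_retry(value):
    """Split a retry annotation value into (reason, task).

    "my reason"           -> ("my reason", None)        plain retry
    "20231023T0904/setup" -> ("20231023T0904", "setup") also rerun setup
    """
    if "/" in value:
        reason, task = value.split("/", 1)
        return reason, task
    return value, None

def should_retrigger(annotations, last_observed):
    """A retry is triggered whenever the annotation value changes."""
    current = annotations.get(RETRY_ANNOTATION)
    return current is not None and current != last_observed
```

Any value works because only a *change* in the value matters; a timestamp is just a convenient way to guarantee the value differs each time.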
https://kubernetes.io/docs/reference/labels-annotations-taints/#change-cause

I guess I wasn't clear about the well-known label part.
I'd like to be consistent with k8s-isms. How would the well-known label trigger a retry?
Maybe a better question... is adding/updating an annotation to trigger a retry the same as using a well known label to trigger a retry? Or are there benefits to using the well-known label?
There is no logic associated with a well-known annotation, at least not for this one that I am aware of. But the benefit would be that all other tools would know the semantics of this label and could thus represent it in UI or tooling.
Currently the way to retry a run is to delete the pod, but this is a problem if you want to automate retries a bit more.
Could you please add support for setting e.g.

.metadata.labels."kubernetes.io/change-cause" = "Retry from MyRetrierApp, attempt 42"

on the Terraform resource, so that it would force a rerun as if the script had changed or the pods had been deleted. kubernetes.io/change-cause is a well-known label, but it could also just be a custom label or custom metadata field.
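An automated retrier along those lines might look something like the sketch below, which patches the label instead of deleting pods. The CRD group/version/plural and the use of the kubernetes Python client are assumptions here, not verified against terraform-operator, and "MyRetrierApp" is just the made-up caller from the example above.

```python
def build_retry_patch(attempt):
    """Build a merge-patch body that only touches the change-cause label."""
    return {
        "metadata": {
            "labels": {
                "kubernetes.io/change-cause": f"Retry from MyRetrierApp, attempt {attempt}"
            }
        }
    }

def retry(name, namespace="default", attempt=1):
    # Third-party dependency: pip install kubernetes
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() inside a pod
    api = client.CustomObjectsApi()
    api.patch_namespaced_custom_object(
        group="tf.galleybytes.com",  # assumption: check with `kubectl api-resources`
        version="v1beta1",           # assumption
        namespace=namespace,
        plural="terraforms",         # assumption
        name=name,
        body=build_retry_patch(attempt),
    )
```

Whether the trigger ends up being this well-known label or a custom annotation, the retrier only has to change one metadata value per attempt.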