Open · thisthat opened this issue 4 months ago
INITIAL THOUGHTS:
Two questions come to mind when looking deeper into this ticket:
1. Do we want to attach k8s Events to the traces only when the deployment fails, i.e. only when the WorkloadDeploy phase fails?
2. Does it make sense to attach the k8s Events to the app trace? I would suggest adding the information to the span representing the failed phase (the same way information is added to the pre/post-deployment phase spans when those phases fail), i.e. attaching the k8s Events to the span representing the WorkloadDeploy phase.
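To make the first question concrete, here is a minimal sketch of the attachment policy being proposed. This is an assumption about the design, not Keptn's actual implementation; the function name and signature are illustrative:

```go
package main

import "fmt"

// shouldAttachEvent sketches the policy discussed above (an assumption,
// not shipped Keptn behaviour): Kubernetes Events are attached to the
// WorkloadDeploy span only when that phase fails.
func shouldAttachEvent(phaseFailed bool, eventType string) bool {
	if !phaseFailed {
		return false // option 1 from the question above: skip events on success
	}
	// On failure, Warning events usually carry the root cause;
	// Normal events add surrounding context.
	return eventType == "Warning" || eventType == "Normal"
}

func main() {
	fmt.Println(shouldAttachEvent(true, "Warning"))
	fmt.Println(shouldAttachEvent(false, "Warning"))
}
```

An alternative would be to always attach Warning events regardless of phase outcome; the predicate above keeps the decision in one place so either policy is a one-line change.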
List of Events that are available during deployment of workloads (Pod, Deployment, ReplicaSet, StatefulSet, DaemonSet):
Pod:
- Normal events:
- Warning events:

Deployment:
- Normal events:
- Warning events:

ReplicaSet:
- Normal events:
- Warning events:

StatefulSet:
- Normal events:
- Warning events:

DaemonSet:
- Normal events:
- Warning events:
Goal
K8s-generated events about the Application deployment should be attached to the trace generated by Keptn.
Details
Keptn provides a unified trace that describes what's happening in your K8s cluster when users deploy applications on it. If something goes wrong, Keptn doesn't provide much information beyond the trace terminating with an error state. It would be better to also attach K8s Event information to the failed trace, so that users can debug and discover the root cause directly in a single source of truth. OTel already has support for Events, which makes it a perfect fit for us.
Since Keptn starts the KeptnAppVersion span before any K8s controller can pick up a Workload CR, and ends it after the K8s controllers finish handling the Workload CR, that span is the perfect place to include all events.
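A minimal sketch of what attaching an Event to that span could look like. The struct below is a local stand-in for the Kubernetes Event fields named in this issue (`firstTimestamp`, `message`, `reason`, `type`); the real types live in `k8s.io/api/core/v1`, and the real span API (`Span.AddEvent` with a timestamp option) in `go.opentelemetry.io/otel/trace`. The `k8s.event.*` attribute keys are an assumed naming scheme, not an established convention:

```go
package main

import (
	"fmt"
	"time"
)

// K8sEvent is a stripped-down stand-in for corev1.Event, holding only the
// fields this issue proposes to export.
type K8sEvent struct {
	FirstTimestamp time.Time
	Message        string
	Reason         string
	Type           string
}

// eventAttributes flattens an Event into span-event attributes for the
// KeptnAppVersion span. With the real OTel API, these would be passed as
// trace.WithAttributes(...) alongside trace.WithTimestamp(e.FirstTimestamp)
// so the span event lands at the Event's original time.
func eventAttributes(e K8sEvent) map[string]string {
	return map[string]string{
		"k8s.event.message": e.Message,
		"k8s.event.reason":  e.Reason,
		"k8s.event.type":    e.Type,
	}
}

func main() {
	e := K8sEvent{
		FirstTimestamp: time.Now(),
		Message:        "Back-off pulling image",
		Reason:         "BackOff",
		Type:           "Warning",
	}
	fmt.Println(eventAttributes(e)["k8s.event.reason"])
}
```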
Acceptance Criteria
- Events are attached to the span at the time they originally occurred (`firstTimestamp` field).
- Events are matched to the deployed resources (`involvedObject` and `metadata` fields).
- `message`, `reason`, and `type` are added as attributes.

DoD