dirrao opened 7 months ago
I'm having this problem on some random tasks, and I'm not able to find out what is happening with them. Solving this issue would help a lot. I'm using the Kubernetes executor, Airflow 2.8.3, and apache-airflow-providers-cncf-kubernetes 8.0.0. Everything is deployed on EKS with Kubernetes 1.27.
@jedcunningham / @hussein-awala Can you add your thoughts on this? Is it a good idea to write the log message from the scheduler to remote storage? How can we generalize this to all remote storages?
Description
Right now, when a task fails because its pod could not be launched or the pod is stuck in the Pending phase, the task logs in the UI are empty. This makes debugging very inconvenient for Airflow users, who may not have access to the scheduler logs. I believe we can push these failure reasons from the Kubernetes executor to the task attempt logs (remote), so that Airflow users can see the task failure reason directly from the UI.
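As a rough illustration of what the executor could surface, here is a minimal sketch of extracting a human-readable failure reason from a pod's status. The function name `summarize_pod_failure` is hypothetical (not an existing Airflow or provider API), and the plain dicts merely mirror the shape of the Kubernetes API's `V1PodStatus` so the example is self-contained; a real implementation would read the status from the Kubernetes Python client and append the result to the task's remote log.

```python
def summarize_pod_failure(status: dict) -> str:
    """Build a human-readable reason string from a pod status dict.

    Hypothetical helper: the dict layout mirrors V1PodStatus
    (phase, conditions, containerStatuses) as returned by the
    Kubernetes API.
    """
    parts = []
    phase = status.get("phase")
    if phase:
        parts.append(f"phase={phase}")
    # Scheduling problems surface as conditions with status "False",
    # e.g. PodScheduled / Unschedulable.
    for cond in status.get("conditions", []):
        if cond.get("status") == "False":
            parts.append(
                f"{cond.get('type')}: {cond.get('reason')} - {cond.get('message')}"
            )
    # Image pull errors and crash loops surface in container waiting states.
    for cs in status.get("containerStatuses", []):
        waiting = (cs.get("state") or {}).get("waiting")
        if waiting:
            parts.append(f"container {cs.get('name')} waiting: {waiting.get('reason')}")
    return "; ".join(parts) or "no failure details in pod status"


# Example: a pod stuck in Pending because it cannot be scheduled.
pending = {
    "phase": "Pending",
    "conditions": [
        {
            "type": "PodScheduled",
            "status": "False",
            "reason": "Unschedulable",
            "message": "0/3 nodes are available",
        }
    ],
}
print(summarize_pod_failure(pending))
```

A string like this, written to the task attempt log instead of leaving it empty, would already tell the user why the pod never started.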
Use case/motivation
Right now, when a task fails because its pod could not be launched or the pod is stuck in the Pending phase, the task logs in the UI are empty. This makes debugging very inconvenient for Airflow users.
Related issues
No response
Are you willing to submit a PR?
Code of Conduct